Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory-b2c | Add Identity Provider | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-identity-provider.md | You can configure Azure AD B2C to allow users to sign in to your application wit
With external identity provider federation, you can offer your consumers the ability to sign in with their existing social or enterprise accounts, without having to create a new account just for your application.
-On the sign-up or sign-in page, Azure AD B2C presents a list of external identity providers the user can choose for sign-in. Once they select one of the external identity providers, they're taken (redirected) to the selected provider's website to complete the sign-in process. After the user successfully signs in, they're returned to Azure AD B2C for authentication of the account in your application.
+On the sign-up or sign-in page, Azure AD B2C presents a list of external identity providers the user can choose for sign-in. Once a user selects an external identity provider, they're redirected to the selected provider's website to complete their sign-in. After they successfully sign in, they're returned to Azure AD B2C for authentication with your application. |
active-directory-b2c | Identity Provider Azure Ad Single Tenant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-azure-ad-single-tenant.md | To enable sign-in for users with an Azure AD account from a specific Azure AD or
1. Select **Certificates & secrets**, and then select **New client secret**.
1. Enter a **Description** for the secret, select an expiration, and then select **Add**. Record the **Value** of the secret for use in a later step.
-### Configuring optional claims
-
-If you want to get the `family_name` and `given_name` claims from Azure AD, you can configure optional claims for your application in the Azure portal UI or application manifest. For more information, see [How to provide optional claims to your Azure AD app](../active-directory/develop/active-directory-optional-claims.md).
-
-1. Sign in to the [Azure portal](https://portal.azure.com) using your organizational Azure AD tenant. Or if you're already signed in, make sure you're using the directory that contains your organizational Azure AD tenant (for example, Contoso):
-   1. Select the **Directories + subscriptions** icon in the portal toolbar.
-   2. On the **Portal settings | Directories + subscriptions** page, find your Azure AD directory in the **Directory name** list, and then select **Switch**.
-1. In the Azure portal, search for and select **Azure Active Directory**.
-1. In the left menu, under **Manage**, select **App registrations**.
-1. Select the application you want to configure optional claims for in the list, such as `Azure AD B2C App`.
-1. From the **Manage** section, select **Token configuration**.
-1. Select **Add optional claim**.
-1. For the **Token type**, select **ID**.
-1. Select the optional claims to add, `family_name` and `given_name`.
-1. Select **Add**. If **Turn on the Microsoft Graph profile permission (required for claims to appear in token)** appears, enable it, and then select **Add** again.
-
-## [Optional] Verify your app authenticity
-
-[Publisher verification](../active-directory/develop/publisher-verification-overview.md) helps your users understand the authenticity of the app you [registered](#register-an-azure-ad-app). A verified app means that the publisher of the app has [verified](/partner-center/verification-responses) their identity using their Microsoft Partner Network (MPN). Learn how to [mark your app as publisher verified](../active-directory/develop/mark-app-as-publisher-verified.md).
-
 ::: zone pivot="b2c-user-flow"
 ## Configure Azure AD as an identity provider
 If the sign-in process is successful, your browser is redirected to `https://jwt
 ::: zone-end
+### [Optional] Configuring optional claims
+
+If you want to get the `family_name` and `given_name` claims from Azure AD, you can configure optional claims for your application in the Azure portal UI or application manifest. For more information, see [How to provide optional claims to your Azure AD app](../active-directory/develop/active-directory-optional-claims.md).
+
+1. Sign in to the [Azure portal](https://portal.azure.com) using your organizational Azure AD tenant. Or if you're already signed in, make sure you're using the directory that contains your organizational Azure AD tenant (for example, Contoso):
+   1. Select the **Directories + subscriptions** icon in the portal toolbar.
+   2. On the **Portal settings | Directories + subscriptions** page, find your Azure AD directory in the **Directory name** list, and then select **Switch**.
+1. In the Azure portal, search for and select **Azure Active Directory**.
+1. In the left menu, under **Manage**, select **App registrations**.
+1. Select the application you want to configure optional claims for in the list, such as `Azure AD B2C App`.
+1. From the **Manage** section, select **Token configuration**.
+1. Select **Add optional claim**.
+1. For the **Token type**, select **ID**.
+1. Select the optional claims to add, `family_name` and `given_name`.
+1. Select **Add**. If **Turn on the Microsoft Graph profile permission (required for claims to appear in token)** appears, enable it, and then select **Add** again.
+
+## [Optional] Verify your app authenticity
+
+[Publisher verification](../active-directory/develop/publisher-verification-overview.md) helps your users understand the authenticity of the app you [registered](#register-an-azure-ad-app). A verified app means that the publisher of the app has [verified](/partner-center/verification-responses) their identity using their Microsoft Partner Network (MPN). Learn how to [mark your app as publisher verified](../active-directory/develop/mark-app-as-publisher-verified.md).
+
 ## Next steps
 Learn how to [pass the Azure AD token to your application](idp-pass-through-user-flow.md). |
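The portal steps in the moved section above have an application-manifest equivalent. As a minimal sketch (the `essential` values shown are assumptions, not required settings), the `optionalClaims` fragment of the app manifest for the `family_name` and `given_name` ID-token claims might look like:

```json
"optionalClaims": {
    "idToken": [
        { "name": "family_name", "essential": false },
        { "name": "given_name", "essential": false }
    ]
}
```

Editing the manifest directly is equivalent to the **Token configuration** > **Add optional claim** portal flow described in the diff.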
active-directory-b2c | Identity Provider Generic Saml | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-generic-saml.md | The **OutputClaims** element contains a list of claims returned by the SAML iden
In the example above, *Contoso-SAML2* includes the claims returned by a SAML identity provider:
-* The **issuerUserId** claim is mapped to the **assertionSubjectName** claim.
+* The **assertionSubjectName** claim is mapped to the **issuerUserId** claim.
 * The **first_name** claim is mapped to the **givenName** claim.
 * The **last_name** claim is mapped to the **surname** claim.
-* The **displayName** claim is mapped to the `http://schemas.microsoft.com/identity/claims/displayname` claim.
+* The `http://schemas.microsoft.com/identity/claims/displayname` claim is mapped to the **displayName** claim.
 * The **email** claim without name mapping.
 The technical profile also returns claims that aren't returned by the identity provider:
If the sign-in process is successful, your browser is redirected to `https://jwt
 - [Configure SAML identity provider options with Azure Active Directory B2C](identity-provider-generic-saml-options.md) |
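The corrected mapping direction above follows the B2C custom-policy convention: `PartnerClaimType` names the claim as issued by the SAML identity provider, and `ClaimTypeReferenceId` names the B2C claim it lands in. A sketch of what the *Contoso-SAML2* technical profile's **OutputClaims** element might contain (reconstructed from the bullet list, not copied from the article):

```xml
<OutputClaims>
  <!-- PartnerClaimType = name issued by the SAML IdP; ClaimTypeReferenceId = the B2C claim -->
  <OutputClaim ClaimTypeReferenceId="issuerUserId" PartnerClaimType="assertionSubjectName" />
  <OutputClaim ClaimTypeReferenceId="givenName" PartnerClaimType="first_name" />
  <OutputClaim ClaimTypeReferenceId="surname" PartnerClaimType="last_name" />
  <OutputClaim ClaimTypeReferenceId="displayName"
               PartnerClaimType="http://schemas.microsoft.com/identity/claims/displayname" />
  <!-- No PartnerClaimType: the incoming claim name "email" is used as-is -->
  <OutputClaim ClaimTypeReferenceId="email" />
</OutputClaims>
```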
active-directory-b2c | Sign In Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/sign-in-options.md | |
active-directory | Partner List | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/partner-list.md |
+Title: Microsoft Entra Permissions Management partners
+description: View current Microsoft Permissions Management partners and their websites.
+Last updated : 01/26/2023
+
+# Microsoft Entra Permissions Management partners
+
+Microsoft verified partners can help you onboard Microsoft Entra Permissions Management and run a risk assessment across your entire multicloud environment.
+
+## Benefits of working with Microsoft verified partners
+
+* **Product Expertise**
+
+  Our partners will help you navigate Permissions Management, letting you in on best practices and guidance to enhance your security strategy.
+
+* **Risk Assessment**
+
+  Partners will guide you through the Entra Permissions Management risk assessment and help you identify top permission risks across your multicloud infrastructure.
+
+* **Onboarding and Deployment Support**
+
+  Partners can guide you through the entire onboarding and deployment process for Permissions Management across AWS, Azure, and GCP.
+
+## Partner list
+
+Select a partner from the list provided to begin your Permissions Management risk assessment. Additionally, Microsoft provides a [full list of security partners](https://appsource.microsoft.com/marketplace/consulting-services?exp=ubp8&page=1&product=m365-sa-identity-and-access-management) that can help secure your organization.
+
+If you're a partner and would like to be considered for the Entra Permissions Management partner list, submit a [request](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbRzw7upfFlddNq4ce6ckvEvhUNzE3V0RQNkpPWjhDSU5FNkk1U1RWUDdDTC4u).
+
+| EPM partner | Website |
+|:-|:--|
+| Edgile | [Quick Start Programs for Microsoft Cloud Security](https://edgile.com/information-security/quick-start-programs-for-microsoft-cloud-security/) |
+| Invoke | [Invoke's Entra PM multicloud risk assessment](https://www.invokellc.com/offers/microsoft-entra-permissions-management-multi-cloud-risk-assessment) |
+| Oxford Computer Group | [Permissions Management implementation and remediation](https://oxfordcomputergroup.com/microsoft-entra-permissions-management-implementation/) |
+| adaQuest | [adaQuest Microsoft Entra Permissions Management Risk Assessment](https://adaquest.com/entra-permission-risk-assessment/) |
+
+## Next steps
+
+* For an overview of Permissions Management, see [What's Permissions Management?](overview.md) |
active-directory | Concept Continuous Access Evaluation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-continuous-access-evaluation.md | When Conditional Access policy or group membership changes need to be applied to
Modern networks often optimize connectivity and network paths for applications differently. This optimization frequently causes variations of the routing and source IP addresses of connections, as seen by your identity provider and resource providers. You may observe this split path or IP address variation in multiple network topologies, including, but not limited to:
 - On-premises and cloud-based proxies.
-- Virtual private network (VPN) implementations, like split tunneling.
+- Virtual private network (VPN) implementations, like [split tunneling](/microsoft-365/enterprise/microsoft-365-vpn-implement-split-tunnel).
 - Software defined wide area network (SD-WAN) deployments.
 - Load balanced or redundant network egress network topologies, like those using [SNAT](https://wikipedia.org/wiki/Network_address_translation#SNAT).
 - Branch office deployments that allow direct internet connectivity for specific applications.
In addition to IP variations, customers also may employ network solutions and services that:
 - Use IP addresses that may be shared with other customers. For example, cloud-based proxy services where egress IP addresses are shared between customers.
-- Use easily varied or undefinable IP addresses. For example, topologies where there are large, dynamic sets of egress IP addresses used, like large enterprise scenarios or split VPN and local egress network traffic.
+- Use easily varied or undefinable IP addresses. For example, topologies where there are large, dynamic sets of egress IP addresses used, like large enterprise scenarios or [split VPN](/microsoft-365/enterprise/microsoft-365-vpn-implement-split-tunnel) and local egress network traffic.
-Networks where egress IP addresses may change frequently or are shared may affect Azure AD Conditional Access and Continuous Access Evaluation (CAE). This variability can affect how these features work, and their recommended configurations.
+Networks where egress IP addresses may change frequently or are shared may affect Azure AD Conditional Access and Continuous Access Evaluation (CAE). This variability can affect how these features work and their recommended configurations. Split tunneling may also cause unexpected blocks when an environment is configured using [Split Tunneling VPN Best Practices](/microsoft-365/enterprise/microsoft-365-vpn-implement-split-tunnel). Routing [Optimized IPs](/microsoft-365/enterprise/microsoft-365-vpn-implement-split-tunnel#optimize-ip-address-ranges) through a Trusted IP/VPN may be required to prevent blocks related to "insufficient_claims" or "Instant IP Enforcement check failed".
 The following table summarizes Conditional Access and CAE feature behaviors and recommendations for different types of network deployments: |
active-directory | Msal Shared Devices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-shared-devices.md |
-Shared device mode is a feature of Azure Active Directory(Azure AD) that allows you to build and deploy applications that support frontline workers and educational scenarios that require shared Android and iOS devices.
+Shared device mode is a feature of Azure Active Directory (Azure AD) that allows you to build and deploy applications that support frontline workers and educational scenarios that require shared Android and iOS devices.
 > [!IMPORTANT]
 > Shared device mode for iOS [!INCLUDE [PREVIEW BOILERPLATE](../../../includes/active-directory-develop-preview.md)]
Azure AD enables these scenarios with a feature called **shared device mode**.
As mentioned, shared device mode is a feature of Azure AD that enables you to:
-- Build applications that support frontline workers
+- Build applications that support frontline workers.
 - Deploy devices to frontline workers with apps that support shared device mode.
### Build applications that support frontline workers |
active-directory | Publisher Verification Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/publisher-verification-overview.md |
-An app that's publisher verified means that the app's publisher (app developer) has verified the authenticity of their organization with Microsoft. Verifying an app includes using a Microsoft Partner Network (MPN) account that's been [verified](/partner-center/verification-responses) and associating the MPN account with an app registration.
+When an app has a verified publisher, this means that the organization that publishes the app has been verified as authentic by Microsoft. Verifying an app includes using a Microsoft Cloud Partner Program (MCPP), formerly known as Microsoft Partner Network (MPN), account that's been [verified](/partner-center/verification-responses) and associating the verified PartnerID with an app registration.
 When the publisher of an app has been verified, a blue *verified* badge appears in the Azure Active Directory (Azure AD) consent prompt for the app and on other webpages:
Publisher verification for an app has the following benefits:
App developers must meet a few requirements to complete the publisher verification process. Many Microsoft partners will have already satisfied these requirements.
-- The developer must have an MPN ID for a valid [Microsoft Cloud Partner Program](https://partner.microsoft.com/membership) account that has completed the [verification](/partner-center/verification-responses) process. The MPN account must be the [partner global account (PGA)](/partner-center/account-structure#the-top-level-is-the-partner-global-account-pga) for the developer's organization.
 > [!NOTE]
 > The MPN account you use for publisher verification can't be your partner location MPN ID. Currently, location MPN IDs aren't supported for the publisher verification process.
Publisher verification currently isn't supported in national clouds. Apps that a
Review frequently asked questions about the publisher verification program. For common questions about requirements and the process, see [Mark an app as publisher verified](mark-app-as-publisher-verified.md).
-- **What does publisher verification *not* tell me about the app or its publisher?** The blue *verified* badge doesn't imply or indicate quality criteria you might look for in an app. For example, you might want to know whether the app or its publisher have specific certifications, comply with industry standards, or adhere to best practices. Publisher verification doesn't give you this information. Other Microsoft programs, like [Microsoft 365 App Certification](/microsoft-365-app-certification/overview), do provide this information.
+- **What does publisher verification *not* tell me about the app or its publisher?** The blue *verified* badge doesn't imply or indicate quality criteria you might look for in an app. For example, you might want to know whether the app or its publisher have specific certifications, comply with industry standards, or adhere to best practices. Publisher verification doesn't give you this information. Other Microsoft programs, like [Microsoft 365 App Certification](/microsoft-365-app-certification/overview), do provide this information. Verified publisher status is only one of the several criteria to consider while evaluating the security and [OAuth consent requests](../manage-apps/manage-consent-requests.md) of an application.
 - **How much does publisher verification cost for the app developer? Does it require a license?** Microsoft doesn't charge developers for publisher verification. No license is required to become a verified publisher. |
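The verified-publisher status discussed above is also exposed on the app's service principal through Microsoft Graph, so it can be checked outside the consent prompt. A hedged sketch using the Microsoft Graph PowerShell SDK (the object ID is a placeholder; assumes you've run `Connect-MgGraph` with suitable read scopes):

```powershell
# Placeholder object ID; requires the Microsoft.Graph PowerShell SDK and Connect-MgGraph.
$sp = Get-MgServicePrincipal -ServicePrincipalId '00000000-0000-0000-0000-000000000000'

# VerifiedPublisher is populated only for publisher-verified apps
$sp.VerifiedPublisher | Format-List DisplayName, VerifiedPublisherId, AddedDateTime
```

An empty `VerifiedPublisher` simply means the publisher hasn't completed verification; as the FAQ notes, the badge itself is not a quality or certification signal.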
active-directory | Service Accounts Govern On Premises | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/service-accounts-govern-on-premises.md | Title: Govern on-premises service accounts | Azure Active Directory -description: Use this guide to create and run an account lifecycle process for service accounts. + Title: Govern on-premises service accounts +description: Learn to create and run an account lifecycle process for on-premises service accounts -+ Previously updated : 08/19/2022 Last updated : 02/07/2023 -* [Group managed service accounts (gMSAs)](service-accounts-group-managed.md) -* [Standalone managed service accounts (sMSAs)](service-accounts-standalone-managed.md) -* [Computer accounts](service-accounts-computer.md) -* [User accounts that function as service accounts](service-accounts-user-on-premises.md) +* Group-managed service accounts (gMSAs) + * [Secure group managed service accounts](service-accounts-group-managed.md) +* Standalone managed service accounts (sMSAs) + * [Secure standalone managed service accounts](service-accounts-standalone-managed.md) +* On-premises computer accounts + * [Secure on-premises computer accounts with Active Directory](service-accounts-computer.md) +* User accounts functioning as service accounts + * [Secure user-based service accounts in Active Directory](service-accounts-user-on-premises.md) +Part of service account governance includes: -It is critical to govern service accounts closely so that you can: +* Protecting them, based on requirements and purpose +* Managing account lifecycle, and their credentials +* Assessing service accounts, based on risk and permissions +* Ensuring Active Directory (AD) and Azure Active Directory (Azure AD) have no unused service accounts, with permissions -* Protect them based on their use-case requirements and purpose. -* Manage the lifecycle of the accounts and their credentials. 
-* Assess them based on the risk they'll be exposed to and the permissions they carry. -* Ensure that Active Directory and Azure Active Directory have no stale service accounts with potentially far-reaching permissions. +## New service account principles -## Principles for creating a new service account --When you create a service account, understand the considerations listed in the following table: +When you create service accounts, consider the information in the following table. | Principle| Consideration | | - |- | -| Service account mapping| Tie the service account to a single service, application, or script. | -| Ownership| Ensure that there's an owner who requests and assumes responsibility for the account. | -| Scope| Define the scope clearly, and anticipate usage duration for the service account. | -| Purpose| Create service accounts for a single, specific purpose. | -| Permissions | Apply the principle of *least permission*. To do so:<li>Never assign permissions to built-in groups, such as administrators.<li>Remove local machine permissions, where appropriate.<li>Tailor access, and use Active Directory delegation for directory access.<li>Use granular access permissions.<li>Set account expirations and location-based restrictions on user-based service accounts. | -| Monitor and audit use| Monitor sign-in data, and ensure that it matches the intended usage. Set alerts for anomalous usage. 
| -| | | --### Set restrictions for user accounts +| Service account mapping| Connect the service account to a service, application, or script | +| Ownership| Ensure there's an account owner who requests and assumes responsibility | +| Scope| Define the scope, and anticipate usage duration| +| Purpose| Create service accounts for one purpose | +| Permissions | Apply the principle of least permission:<li>Don't assign permissions to built-in groups, such as administrators<li>Remove local machine permissions, where feasible<li>Tailor access, and use AD delegation for directory access<li>Use granular access permissions<li>Set account expiration and location restrictions on user-based service accounts | +| Monitor and audit use| <li>Monitor sign-in data, and ensure it matches the intended usage <li>Set alerts for anomalous usage | -For user accounts that are used as service accounts, apply the following settings: +### User account restrictions -* [**Account expiration**](/powershell/module/activedirectory/set-adaccountexpiration?view=winserver2012-ps&preserve-view=true): Set the service account to automatically expire at a set time after its review period, unless you've determined that the account should continue. +For user accounts used as service accounts, apply the following settings: -* **LogonWorkstations**: Restrict permissions where the service account can sign in. If it runs locally on a machine and accesses only resources on that machine, restrict it from signing in anywhere else. --* [**Cannot change password**](/powershell/module/activedirectory/set-aduser): Prevent the service account from changing its own password by setting the parameter to true. 
+* Account expiration - set the service account to automatically expire, after its review period, unless the account can continue +* LogonWorkstations - restrict service account sign-in permissions + * If it runs locally and accesses resources on the machine, restrict it from signing in elsewhere +* Can't change password - set the parameter to **true** to prevent the service account from changing its own password -## Build a lifecycle management process --To help maintain the security of your service accounts, you must manage them from the time you identify the need until they're decommissioned. +## Lifecycle management process -For lifecycle management of service accounts, use the following process: +To help maintain service account security, manage them from inception to decommission. Use the following process: -1. Collect usage information for the account. -1. Move the service account and app to the configuration management database (CMDB). -1. Perform risk assessment or a formal review. -1. Create the service account and apply restrictions. -1. Schedule and perform recurring reviews. Adjust permissions and scopes as necessary. -1. Deprovision the account when appropriate. +1. Collect account usage information. +2. Move the service account and app to the configuration management database (CMDB). +3. Perform risk assessment or a formal review. +4. Create the service account and apply restrictions. +5. Schedule and perform recurring reviews. +6. Adjust permissions and scopes as needed. +7. Deprovision the account. -### Collect usage information for the service account +### Collect service account usage information -Collect relevant business information for each service account. The following table lists the minimum amount of information to collect, but you should collect everything that's necessary to make the business case for each account's existence. +Collect relevant information for each service account. 
The following table lists the minimum information to collect. Obtain what's needed to validate each account. | Data| Description | | - | - |-| Owner| The user or group that's accountable for the service account | +| Owner| The user or group accountable for the service account | | Purpose| The purpose of the service account |-| Permissions (scopes)| The expected set of permissions | -| CMDB links| The cross-link service account with the target script or application and owners | -| Risk| The risk and business impact scoring, based on the security risk assessment | -| Lifetime| The anticipated maximum lifetime for enabling the scheduling of account expiration or recertification | -| | | +| Permissions (scopes)| The expected permissions | +| CMDB links| The cross-link service account with the target script or application, and owners | +| Risk| The results of a security risk assessment | +| Lifetime| The anticipated maximum lifetime to schedule account expiration or recertification | -Ideally, you want to make the request for an account self-service, and require the relevant information. The owner can be an application or business owner, an IT member, or an infrastructure owner. By using a tool such as Microsoft Forms for this request and associated information, you'll make it easier to port it to your CMDB inventory tool if the account is approved. +Make the account request self-service, and require the relevant information. The owner is an application or business owner, an IT team member, or an infrastructure owner. You can use Microsoft Forms for requests and associated information. If the account is approved, use Microsoft Forms to port it to a configuration management databases (CMDB) inventory tool. -### Onboard service account to CMDB +### Service accounts and CMDB -Store the collected information in a CMDB-type application. In addition to the business information, include all dependencies on other infrastructure, apps, and processes. 
This central repository makes it easier to: +Store the collected information in a CMDB application. Include dependencies on infrastructure, apps, and processes. Use this central repository to: -* Assess risk. -* Configure the service account with the required restrictions. -* Understand any relevant functional and security dependencies. -* Conduct regular reviews for security and continued need. -* Contact the owners for reviewing, retiring, and changing the service account. +* Assess risk +* Configure the service account with restrictions +* Ascertain functional and security dependencies +* Conduct regular reviews for security and continued need +* Contact the owner to review, retire, and change the service account -Consider a service account that's used to run a website and has permissions to connect to one or more Human Resources (HR) SQL databases. The information stored in your CMDB for the service account, including example descriptions, is listed in the following table: +#### Example HR scenario + +An example is a service account that runs a website with permissions to connect to Human Resources SQL databases. The information in the service account CMDB, including examples, is in the following table: -|Data | Example description| +|Data | Example| | - | - |-| Owner, Deputy| John Bloom, Anna Mayers | -| Purpose| Run the HR webpage and connect to HR databases. Can impersonate end users when accessing databases. | -| Permissions, scopes| HR-WEBServer: sign in locally; run web page<br>HR-SQL1: sign in locally; read permissions on all HR databases<br>HR-SQL2: sign in locally; read permissions on Salary database only | -| Cost Center| 883944 | -| Risk Assessed| Medium; Business Impact: Medium; private information; Medium | -| Account Restrictions| Log on to: only aforementioned servers; Cannot change password; MBI-Password Policy; | +| Owner, Deputy| Name, Name | +| Purpose| Run the HR webpage and connect to HR databases. 
Impersonate end users when accessing databases. | +| Permissions, scopes| HR-WEBServer: sign in locally; run web page<br>HR-SQL1: sign in locally; read permissions on HR databases<br>HR-SQL2: sign in locally; read permissions on Salary database only | +| Cost center| 123456 | +| Risk assessed| Medium; Business Impact: Medium; private information; Medium | +| Account restrictions| Sign in to: only aforementioned servers; Can't change password; MBI-Password Policy; | | Lifetime| Unrestricted |-| Review Cycle| Biannually (by owner, by security team, by privacy) | -| | | --### Perform a risk assessment or formal review of service account usage +| Review cycle| Biannually: By owner, security team, or privacy team | -Suppose your account is compromised by an unauthorized source. Assess the risks the account might pose to its associated application or service and to your infrastructure. Consider both direct and indirect risks. +### Service account risk assessments or formal reviews -* What would an unauthorized user gain direct access to? -* What other information or systems can the service account access? -* Can the account be used to grant additional permissions? -* How will you know when the permissions change? +If your account is compromised by an unauthorized source, assess the risks to associated applications, services, and infrastructure. Consider direct and indirect risks: -After you've conducted and documented the risk assessment, you might find that the risks have an impact on: +* Resources an unauthorized user can gain access to + * Other information or systems the service account can access +* Permissions the account can grant + * Indications or signals when permissions change -* Account restrictions. -* Account lifetime. -* Account review requirements (cadence and reviewers). 
+After the risk assessment, documentation likely shows that risks affect account: + +* Restrictions +* Lifetime +* Review requirements + * Cadence and reviewers ### Create a service account and apply account restrictions -Create a service account only after you've completed the risk assessment and documented the relevant information in your CMDB. Align the account restrictions with the risk assessment. Consider the following restrictions when they're relevant to your assessment: --* For all user accounts that you use as service accounts, define a realistic, definite end date. Set the date by using the **Account Expires** flag. For more information, see [Set-ADAccountExpiration](/powershell/module/activedirectory/set-adaccountexpiration). --* Login to the [LogonWorkstation](/powershell/module/activedirectory/set-aduser). --* [Password Policy](../../active-directory-domain-services/password-policy.md) requirements. --* Account creation in an [organizational unit location](/windows-server/identity/ad-ds/plan/delegating-administration-of-account-ous-and-resource-ous) that ensures management only for allowed users. --* Setting up and collecting auditing [that detects changes](/windows/security/threat-protection/auditing/audit-directory-service-changes) to the service account, and [service account use](https://www.manageengine.com/products/active-directory-audit/how-to/audit-kerberos-authentication-events.html). --When you're ready to put the service account into production, grant access to it more securely. --### Schedule regular reviews of service accounts --Set up regular reviews of service accounts that are classified as medium and high risk. Reviews should include: --* Owner attestation to the continued need for the account, and a justification of permissions and scopes. --* Review by privacy and security teams, including an evaluation of upstream and downstream connections. 
+> [!NOTE] +> Create a service account after the risk assessment, and document the findings in a CMDB. Align account restrictions with risk assessment findings. + +Consider the following restrictions, although some might not be relevant to your assessment. ++* For user accounts used as service accounts, define a realistic end date + * Use the **Account Expires** flag to set the date + * Learn more: [Set-ADAccountExpiration](/powershell/module/activedirectory/set-adaccountexpiration) +* Sign in to the [LogonWorkstation](/powershell/module/activedirectory/set-aduser) +* [Password policy](../../active-directory-domain-services/password-policy.md) requirements +* Create accounts in an [organizational unit location](/windows-server/identity/ad-ds/plan/delegating-administration-of-account-ous-and-resource-ous) that ensures only some users will manage it +* Set up and collect auditing that detects [service account changes](/windows/security/threat-protection/auditing/audit-directory-service-changes), and [service account usage](https://www.manageengine.com/products/active-directory-audit/how-to/audit-kerberos-authentication-events.html) +* Grant account access more securely before it goes into production ++### Service account reviews + +Schedule regular service account reviews, especially those classified Medium and High Risk. Reviews can include: -* Data from audits, ensuring that it's being used only for its intended purposes. +* Owner attestation of the need for the account, with justification of permissions and scopes +* Privacy and security team reviews that include upstream and downstream dependencies +* Audit data review + * Ensure the account is used for its stated purpose ### Deprovision service accounts -In your deprovisioning process, first remove permissions and monitoring, and then remove the account, if appropriate. --You deprovision service accounts when: --* The script or application that the service account was created for is retired. 
+Deprovision service accounts at the following junctures: -* The function within the script or application, which the service account is used for (for example, access to a specific resource), is retired. +* Retirement of the script or application for which the service account was created +* Retirement of the script or application function, for which the service account was used +* Replacement of the service account with another -* The service account has been replaced with a different service account. --After you've removed all permissions, remove the account by doing the following: --1. When the associated application or script is deprovisioned, monitor the sign-ins and resource access for the associated service accounts to be sure that they're not being used in another process. If you're sure it's no longer needed, go to next step. --1. Disable the service account to prevent sign-in, and ensure that it's no longer needed. Create a business policy for the time during which accounts should remain disabled. --1. After the remain-disabled policy is fulfilled, delete the service account. -- * **For MSAs**: [Uninstall the account](/powershell/module/activedirectory/uninstall-adserviceaccount?view=winserver2012-ps&preserve-view=true) by using PowerShell, or delete it manually from the managed service account container. +To deprovision: + +1. Remove permissions and monitoring. +2. Examine sign-ins and resource access of related service accounts to ensure no potential effect on them. +3. Prevent account sign-in. +4. Ensure the account is no longer needed (there's no complaint). +5. Create a business policy that determines the amount of time that accounts are disabled. +6. Delete the service account. - + * MSAs - see [Uninstall the account](/powershell/module/activedirectory/uninstall-adserviceaccount?view=winserver2012-ps&preserve-view=true).
Use PowerShell, or delete it manually from the managed service account container. + * Computer or user accounts - manually delete the account from Active Directory ## Next steps To learn more about securing service accounts, see the following articles: -* [Introduction to on-premises service accounts](service-accounts-on-premises.md) +* [Securing on-premises service accounts](service-accounts-on-premises.md) * [Secure group managed service accounts](service-accounts-group-managed.md) * [Secure standalone managed service accounts](service-accounts-standalone-managed.md) -* [Secure computer accounts](service-accounts-computer.md) -* [Secure user accounts](service-accounts-user-on-premises.md) +* [Secure on-premises computer accounts with AD](service-accounts-computer.md) +* [Secure user-based service accounts in AD](service-accounts-user-on-premises.md) |
active-directory | Service Accounts Managed Identities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/service-accounts-managed-identities.md | Title: Securing managed identities in Azure Active Directory -description: Explanation of how to find, assess, and increase the security of managed identities. +description: Learn to find, assess, and increase the security of managed identities in Azure AD -+ Previously updated : 08/20/2022 Last updated : 02/07/2023 -# Securing managed identities +# Securing managed identities in Azure Active Directory -Developers are often challenged by the management of secrets and credentials used to secure communication between different services. Managed identities are secure Azure Active Directory (Azure AD) identities created to provide identities for Azure resources. +In this article, learn about managing secrets and credentials to secure communication between services. Managed identities provide an automatically managed identity in Azure Active Directory (Azure AD). Applications use managed identities to connect to resources that support Azure AD authentication, and to obtain Azure AD tokens, without credentials management. -## Benefits of using managed identities for Azure resources +## Benefits of managed identities -The following are benefits of using managed identities: +Benefits of using managed identities: -* You don't need to manage credentials. With managed identities, credentials are fully managed, rotated, and protected by Azure. Identities are automatically provided and deleted with Azure resources. Managed identities enable Azure resources to communicate with all services that support Azure AD authentication. +* With managed identities, credentials are fully managed, rotated, and protected by Azure. Identities are provided and deleted with Azure resources. Managed identities enable Azure resources to communicate with services that support Azure AD authentication. 
-* No one (including any Global Administrator) has access to the credentials, so they cannot be accidentally leaked by, for example, being included in code. +* No one, including the Global Administrator, has access to the credentials, which can't be accidentally leaked by being included in code. -## When to use managed identities? +## Using managed identities -Managed identities are best used for communications among services that support Azure AD authentication. +Managed identities are best for communications among services that support Azure AD authentication. A source system requests access to a target service. Any Azure resource can be a source system. For example, an Azure virtual machine (VM), Azure Function instance, and Azure App Services instances support managed identities. -A source system requests access to a target service. Any Azure resource can be a source system. For example, an Azure VM, Azure Function instance, and Azure App Services instances support managed identities. +Learn more in the video, [What can a managed identity be used for?](https://www.youtube.com/embed/5lqayO_oeEo) - > [!VIDEO https://www.youtube.com/embed/5lqayO_oeEo] +### Authentication and authorization -### How authentication and authorization work +With managed identities, the source system obtains a token from Azure AD without owner credential management. Azure manages the credentials. Tokens obtained by the source system are presented to the target system for authentication. -With managed identities the source system can obtain a token from Azure AD without the source owner having to manage credentials. Azure manages the credentials. The token obtained by the source system is presented to the target system for authentication. +The target system authenticates and authorizes the source system to allow access. If the target service supports Azure AD authentication, it accepts an access token issued by Azure AD. 
-The target system needs to authenticate (identify) and authorize the source system before allowing access. When the target service supports Azure AD-based authentication it accepts an access token issued by Azure AD. +Azure has a control plane and a data plane. You create resources in the control plane, and access them in the data plane. For example, you create an Azure Cosmos DB database in the control plane, but query it in the data plane. -Azure has a control plane and a data plane. In the control plane, you create resources, and in the data plane you access them. For example, you create an Azure Cosmos DB database in the control plane, but query it in the data plane. +After the target system accepts the token for authentication, it supports mechanisms for authorization for its control plane and data plane. -Once the target system accepts the token for authentication, it can support different mechanisms for authorization for its control plane and data plane. +Azure control plane operations are managed by Azure Resource Manager and use Azure role-based access control (Azure RBAC). In the data plane, target systems have authorization mechanisms. Azure Storage supports Azure RBAC on the data plane. For example, applications using Azure App Services can read data from Azure Storage, and applications using Azure Kubernetes Service can read secrets stored in Azure Key Vault. -All of Azure's control plane operations are managed by [Azure Resource Manager](../../azure-resource-manager/management/overview.md) and use [Azure Role Based Access Control](../../role-based-access-control/overview.md). In the data plane, each target system has its own authorization mechanism. Azure Storage supports Azure RBAC on the data plane. For example, applications using Azure App Services can read data from Azure Storage, and applications using Azure Kubernetes Service can read secrets stored in Azure Key Vault.
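To make the token flow described above concrete, here's a hedged sketch of how a source system (an Azure VM) obtains a token without managing any credentials. It runs only inside an Azure VM that has a managed identity; the target resource URI is an example.

```powershell
# Sketch only: request an Azure Resource Manager token from the Azure Instance
# Metadata Service (IMDS). Works only inside an Azure VM with a managed identity.
$response = Invoke-RestMethod -Headers @{ Metadata = "true" } -Method GET -Uri (
    "http://169.254.169.254/metadata/identity/oauth2/token" +
    "?api-version=2018-02-01&resource=https://management.azure.com/")

# The bearer token is presented to the target system for authentication
$token = $response.access_token
```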
+Learn more: +* [What is Azure Resource Manager?](../../azure-resource-manager/management/overview.md) +* [What is Azure RBAC?](../../role-based-access-control/overview.md) +* [Azure control plane and data plane](../../azure-resource-manager/management/control-plane-and-data-plane.md) +* [Azure services that can use managed identities to access other services](../managed-identities-azure-resources/managed-identities-status.md) -For more information about control and data planes, see [Control plane and data plane operations - Azure Resource Manager](../../azure-resource-manager/management/control-plane-and-data-plane.md). +## System-assigned and user-assigned managed identities -All Azure services will eventually support managed identities. For more information, see [Services that support managed identities for Azure resources](../managed-identities-azure-resources/services-support-managed-identities.md). +There are two types of managed identities, system- and user-assigned. -## Types of managed identities +System-assigned managed identity: -There are two types of managed identities—system-assigned and user-assigned. +* One-to-one relationship with the Azure resource + * For example, there's a unique managed identity associated with each VM +* Tied to the Azure resource lifecycle. When the resource is deleted, the managed identity associated with it is automatically deleted. +* This action eliminates the risk from orphaned accounts -System-assigned managed identity has the following properties: +User-assigned managed identity -* They have 1:1 relationship with the Azure resource. For example, there's a unique managed identity associated with each VM. --* They are tied to the lifecycle of Azure resources. When the resource is deleted, the managed identity associated with it is automatically deleted, eliminating the risk associated with orphaned accounts
--User-assigned managed identities have the following properties: --* The lifecycle of these identities is independent of an Azure resource, and you must manage the lifecycle. When the Azure resource is deleted, the assigned user-assigned managed identity is not automatically deleted for you. --* A single user-assigned managed identity can be assigned to zero or more Azure resources. --* They can be created ahead of time and then assigned to a resource. +* The lifecycle is independent from an Azure resource. You manage the lifecycle. + * When the Azure resource is deleted, the assigned user-assigned managed identity isn't automatically deleted +* Assign user-assigned managed identity to zero or more Azure resources +* Create an identity ahead of time, and then assign it to a resource later ## Find managed identity service principals in Azure AD -There are several ways in which you can find managed identities: --* Using the Enterprise Applications page in the Azure portal --* Using Microsoft Graph +To find managed identities, you can use: -### Using the Azure portal +* Enterprise applications page in the Azure portal +* Microsoft Graph -1. In Azure Active Directory, select Enterprise applications. +### The Azure portal -2. Select the filter for "Managed Identities" +1. In the Azure portal, in the left navigation, select **Azure Active Directory**. +2. In the left navigation, select **Enterprise applications**. +3. In the **Application type** column, under **Value**, select the down-arrow to select **Managed Identities**. -  +  - +### Microsoft Graph -### Using Microsoft Graph --You can get a list of all managed identities in your tenant with the following GET request to Microsoft Graph: +Use the following GET request to Microsoft Graph to get a list of managed identities in your tenant. `https://graph.microsoft.com/v1.0/servicePrincipals?$filter=(servicePrincipalType eq 'ManagedIdentity')` -You can filter these requests.
For more information, see the Graph documentation for [GET servicePrincipal](/graph/api/serviceprincipal-get). +You can filter these requests. For more information, see [GET servicePrincipal](/graph/api/serviceprincipal-get?view=graph-rest-1.0&tabs=http&preserve-view=true). -## Assess the security of managed identities +## Assess managed identity security -You can assess the security of managed identities in the following ways: +To assess managed identity security: -* Examine privileges and ensure that the least privileged model is selected. Use the following PowerShell cmdlet to get the permissions assigned to your managed identities. +* Examine privileges to ensure the least-privileged model is selected + * Use the following PowerShell cmdlet to get the permissions assigned to your managed identities: `Get-AzureADServicePrincipal | % { Get-AzureADServiceAppRoleAssignment -ObjectId $_.ObjectId }` - -* Ensure the managed identity is not part of any privileged groups, such as an administrators group. You can do this by enumerating the members of your highly privileged groups with PowerShell. +* Ensure the managed identity is not part of a privileged group, such as an administrators group. + * To enumerate the members of your highly privileged groups with PowerShell: `Get-AzureADGroupMember -ObjectId <String> [-All <Boolean>] [-Top <Int32>] [<CommonParameters>]` -* [Ensure you know what resources the managed identity is accessing](../../role-based-access-control/role-assignments-list-powershell.md). +* Confirm what resources the managed identity accesses + * See [List Azure role assignments using Azure PowerShell](../../role-based-access-control/role-assignments-list-powershell.md). -## Move to managed identities -If you are using a service principal or an Azure AD user account, evaluate if you can instead use a managed identity to eliminate the need to protect, rotate, and manage credentials.
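The assessment cmdlets quoted above can be combined into a short audit sketch. This is a hedged example, not part of the source article: it assumes the AzureAD PowerShell module and an existing `Connect-AzureAD` session, and the privileged-group object ID is a placeholder.

```powershell
# Sketch only: assumes the AzureAD PowerShell module and a signed-in session
# (Connect-AzureAD). The group object ID below is a placeholder.

# 1. Least privilege: list managed identities and their app role assignments
Get-AzureADServicePrincipal -All $true |
    Where-Object { $_.ServicePrincipalType -eq 'ManagedIdentity' } |
    ForEach-Object { Get-AzureADServiceAppRoleAssignment -ObjectId $_.ObjectId }

# 2. Privileged groups: flag service principals in a highly privileged group
Get-AzureADGroupMember -ObjectId "<privileged-group-object-id>" -All $true |
    Where-Object { $_.ObjectType -eq 'ServicePrincipal' }
```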
+If you're using a service principal or an Azure AD user account, evaluate the use of managed identities. You can eliminate the need to protect, rotate, and manage credentials. ## Next steps -**For information on creating managed identities, see:** --[Create a user assigned managed identity](../managed-identities-azure-resources/how-to-manage-ua-identity-portal.md). --[Enable a system assigned managed identity during resource creation](../managed-identities-azure-resources/qs-configure-portal-windows-vm.md) --[Enable system assigned managed identity on an existing resource](../managed-identities-azure-resources/qs-configure-portal-windows-vm.md) --**For more information on service accounts see:** --[Introduction to Azure Active Directory service accounts](service-accounts-introduction-azure.md) --[Securing service principals](service-accounts-principal.md) +* [What are managed identities for Azure resources?](../managed-identities-azure-resources/overview.md) +* [Configure managed identities for Azure resources on a VM using the Azure portal](../managed-identities-azure-resources/qs-configure-portal-windows-vm.md) -[Governing Azure service accounts](service-accounts-governing-azure.md) +**Service accounts** -[Introduction to on-premises service accounts](service-accounts-on-premises.md) +* [Securing cloud-based service accounts](service-accounts-introduction-azure.md) +* [Securing service principals](service-accounts-principal.md) +* [Governing Azure AD service accounts](service-accounts-governing-azure.md) +* [Securing on-premises service accounts](service-accounts-on-premises.md) |
active-directory | How To Connect Install Custom | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-custom.md | ms.assetid: 6d42fb79-d9cf-48da-8445-f482c4c536af Previously updated : 01/26/2023 Last updated : 02/08/2023 When you install the synchronization services, you can leave the optional config | Use an existing SQL Server |Allows you to specify the SQL Server name and instance name. Choose this option if you already have a database server that you want to use. For **Instance Name**, enter the instance name, a comma, and the port number if your SQL Server instance doesn't have browsing enabled. Then specify the name of the Azure AD Connect database. Your SQL privileges determine whether a new database can be created or your SQL administrator must create the database in advance. If you have SQL Server administrator (SA) permissions, see [Install Azure AD Connect by using an existing database](how-to-connect-install-existing-database.md). If you have delegated permissions (DBO), see [Install Azure AD Connect by using SQL delegated administrator permissions](how-to-connect-install-sql-delegation.md). | | Use an existing service account |By default, Azure AD Connect provides a virtual service account for the synchronization services. If you use a remote instance of SQL Server or use a proxy that requires authentication, you can use a *managed service account* or a password-protected service account in the domain. In those cases, enter the account you want to use. To run the installation, you need to be an SA in SQL so you can create sign-in credentials for the service account. For more information, see [Azure AD Connect accounts and permissions](reference-connect-accounts-permissions.md#adsync-service-account). </br></br>By using the latest build, the SQL administrator can now provision the database out of band. Then the Azure AD Connect administrator can install it with database owner rights. 
For more information, see [Install Azure AD Connect by using SQL delegated administrator permissions](how-to-connect-install-sql-delegation.md).| | Specify custom sync groups |By default, when the synchronization services are installed, Azure AD Connect creates four groups that are local to the server. These groups are Administrators, Operators, Browse, and Password Reset. You can specify your own groups here. The groups must be local on the server. They can't be located in the domain. |-|Import synchronization settings (preview)|Allows you to import settings from other versions of Azure AD Connect. For more information, see [Importing and exporting Azure AD Connect configuration settings](how-to-connect-import-export-config.md).| +|Import synchronization settings|Allows you to import settings from other versions of Azure AD Connect. For more information, see [Importing and exporting Azure AD Connect configuration settings](how-to-connect-import-export-config.md).| ### User sign-in After installing the required components, select your users' single sign-on method. The following table briefly describes the available options. For a full description of the sign-in methods, see [User sign-in](plan-connect-user-signin.md). |
active-directory | How To Connect Staged Rollout | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-staged-rollout.md | Enable *seamless SSO* by doing the following: `Import-Module .\AzureADSSO.psd1` -4. Run PowerShell as an administrator. In PowerShell, call `New-AzureADSSOAuthenticationContext`. This command opens a pane where you can enter your tenant's Hybrid Identity Administratoristrator credentials. +4. Run PowerShell as an administrator. In PowerShell, call `New-AzureADSSOAuthenticationContext`. This command opens a pane where you can enter your tenant's Hybrid Identity Administrator credentials. 5. Call `Get-AzureADSSOStatus | ConvertFrom-Json`. This command displays a list of Active Directory forests (see the "Domains" list) on which this feature has been enabled. By default, it is set to false at the tenant level. |
active-directory | Reference Connect Accounts Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-accounts-permissions.md | Title: 'Azure AD Connect: Accounts and permissions | Microsoft Docs' -description: This topic describes the accounts used and created and permissions required. + Title: 'Azure AD Connect: Accounts and permissions' +description: Learn about accounts that are used and created and the permissions that are required to install and use Azure AD Connect. na-## Accounts used for Azure AD Connect +Learn about accounts that are used and created and the permissions that are required to install and use Azure AD Connect. - -Azure AD Connect uses 3 accounts in order to synchronize information from on-premises or Windows Server Active Directory to Azure Active Directory. These accounts are: +## Accounts used for Azure AD Connect -- **AD DS Connector account**: used to read/write information to Windows Server Active Directory+Azure AD Connect uses three accounts to *synchronize information* from on-premises Windows Server Active Directory (Windows Server AD) to Azure Active Directory (Azure AD): -- **ADSync service account**: used to run the synchronization service and access the SQL database+- **AD DS Connector account**: Used to read and write information to Windows Server AD by using Active Directory Domain Services (AD DS). -- **Azure AD Connector account**: used to write information to Azure AD+- **ADSync service account**: Used to run the sync service and access the SQL Server database. -In addition to these three accounts used to run Azure AD Connect, you will also need the following additional accounts to install Azure AD Connect. These are: +- **Azure AD Connector account**: Used to write information to Azure AD. 
-- **Local Administrator account**: The administrator who is installing Azure AD Connect and who has local Administrator permissions on the machine.+You also need the following accounts to *install* Azure AD Connect: -- **AD DS Enterprise Administrator account**: Optionally used to create the "AD DS Connector account" above.+- **Local Administrator account**: The administrator who is installing Azure AD Connect and who has local Administrator permissions on the computer. -- **Azure AD Global Administrator account**: used to create the Azure AD Connector account and configure Azure AD. You can view Hybrid Identity Administrator accounts in the Azure portal. See [List Azure AD role assignments](../../active-directory/roles/view-assignments.md).+- **AD DS Enterprise Administrator account**: Optionally used to create the required AD DS Connector account. -- **SQL SA account (optional)**: used to create the ADSync database when using the full version of SQL Server. This SQL Server may be local or remote to the Azure AD Connect installation. This account may be the same account as the Enterprise Administrator. Provisioning the database can now be performed out of band by the SQL administrator and then installed by the Azure AD Connect administrator with database owner rights. For information on this see [Install Azure AD Connect using SQL delegated administrator permissions](how-to-connect-install-sql-delegation.md)+- **Azure AD Global Administrator account**: Used to create the Azure AD Connector account and to configure Azure AD. You can view Global Administrator and Hybrid Identity Administrator accounts in the Azure portal. See [List Azure AD role assignments](../../active-directory/roles/view-assignments.md). +- **SQL SA account (optional)**: Used to create the ADSync database when you use the full version of SQL Server. The instance of SQL Server can be local or remote to the Azure AD Connect installation.
This account can be the same account as the Enterprise Administrator account. ->[!IMPORTANT] -> As of build 1.4.###.# it is no longer supported to use an enterprise admin or a domain admin account as the AD DS Connector account. If you attempt to enter an account that is an enterprise admin or domain admin when specifying **use existing account**, you will receive an error. + Provisioning the database can now be performed out-of-band by the SQL Server administrator and then installed by the Azure AD Connect administrator if the account has database owner (DBO) permissions. For more information, see [Install Azure AD Connect by using SQL delegated administrator permissions](how-to-connect-install-sql-delegation.md). -> [!NOTE] -> It is supported to manage the administrative accounts used in Azure AD Connect from an ESAE Administrative Forest (also known as "Red forest"). -> Dedicated administrative forests allow organizations to host administrative accounts, workstations, and groups in an environment that has stronger security controls than the production environment. -> To learn more about dedicated administrative forests please refer to [ESAE Administrative Forest Design Approach](/windows-server/identity/securing-privileged-access/securing-privileged-access-reference-material#esae-administrative-forest-design-approach). +> [!IMPORTANT] +> Beginning in build 1.4.###.#, you no longer can use an Enterprise Administrator account or a Domain Administrator account as the AD DS Connector account. If you attempt to enter an account that is an Enterprise Administrator or Domain Administrator for **Use existing account**, the wizard displays an error message and you can't proceed. > [!NOTE]-> The Global Administrator role is not required after the initial setup and the only required account will be the **Directory Synchronization Accounts** role account. That does not necessarily mean that you will want to just remove the account with the Global Administrator role.
It is better to change the role to a less powerful role, as totally removing the account may introduce issues if you ever need to re-run the wizard again. By reducing the privilege of the role you can always re-elevate the privileges if you have to utilize the Azure AD Connect wizard again. +> You can manage the administrative accounts that are used in Azure AD Connect by using an *enterprise access model*. An organization can use an enterprise access model to host administrative accounts, workstations, and groups in an environment that has stronger security controls than a production environment. For more information, see [Enterprise access model](/windows-server/identity/securing-privileged-access/securing-privileged-access-reference-material#esae-administrative-forest-design-approach). +> +> The Global Administrator role isn't required after initial setup. After setup, the only required account is the Directory Synchronization Accounts role account. Instead of removing the account that has the Global Administrator role, we recommend that you change the role to a role that has a lower level of permissions. Completely removing the account might introduce issues if you ever need to run the wizard again. You can add permissions if you need to use the Azure AD Connect wizard again. -## Installing Azure AD Connect -The Azure AD Connect installation wizard offers two different paths: +## Azure AD Connect installation -* In Express Settings, the wizard requires more privileges. This is so that it can set up your configuration easily, without requiring you to create users or configure permissions. -* In Custom Settings, the wizard offers you more choices and options. However, there are some situations in which you need to ensure you have the correct permissions yourself. 
+The Azure AD Connect installation wizard offers two paths: +- **Express settings**: In Azure AD Connect express settings, the wizard requires more permissions so that it can easily configure your installation. The wizard creates users and sets up permissions so that you don't have to. +- **Custom settings**: In Azure AD Connect custom settings, you have more choices and options in the wizard. However, for some scenarios, it's important to ensure that you have the correct permissions yourself. +<a name="express-settings-installation"></a> -## Express settings installation -In Express settings, the installation wizard asks for the following: +## Express settings - - AD DS Enterprise Administrator credentials - - Azure AD Global Administrator credentials +In express settings, you enter this information in the installation wizard: -### AD DS Enterprise Admin credentials -The AD DS Enterprise Admin account is used to configure your on-premises Active Directory. These credentials are only used during the installation and are not used after the installation has completed. The Enterprise Admin, not the Domain Admin should make sure the permissions in Active Directory can be set in all domains. +- AD DS Enterprise Administrator credentials +- Azure AD Global Administrator credentials -If you are upgrading from DirSync, the AD DS Enterprise Admins credentials are used to reset the password for the account used by DirSync. You also need Azure AD Global Administrator credentials. +### AD DS Enterprise Administrator credentials ++The AD DS Enterprise Administrator account is used to configure Windows Server AD. These credentials are used only during installation. The Enterprise Administrator, not the Domain Administrator, should make sure that the permissions in Windows Server AD can be set in all domains. ++If you're upgrading from DirSync, the AD DS Enterprise Administrator credentials are used to reset the password for the account that DirSync used. 
Azure AD Global Administrator credentials also are required. ### Azure AD Global Administrator credentials-These credentials are only used during the installation and are not used after the installation has completed. It is used to create the Azure AD Connector account used for synchronizing changes to Azure AD. The account also enables sync as a feature in Azure AD. -For more information on Global Administrator accounts, see [Global Administrator](../../active-directory/roles/permissions-reference.md#global-administrator). +Credentials for the Azure AD Global Administrator account are used only during installation. The account is used to create the Azure AD Connector account that syncs changes to Azure AD. The account also enables sync as a feature in Azure AD. ++For more information, see [Global Administrator](../../active-directory/roles/permissions-reference.md#global-administrator). ### AD DS Connector account required permissions for express settings-The AD DS Connector account is created for reading and writing to Windows Server AD and has the following permissions when created by express settings: ++The AD DS Connector account is created to read and write to Windows Server AD. 
The account has the following permissions when it's created during express settings installation: | Permission | Used for | | | |-| <li>Replicate Directory Changes</li><li>Replicate Directory Changes All |Password hash sync | +| - Replicate Directory Changes<br />- Replicate Directory Changes All |Password hash sync | | Read/Write all properties User |Import and Exchange hybrid | | Read/Write all properties iNetOrgPerson |Import and Exchange hybrid | | Read/Write all properties Group |Import and Exchange hybrid | | Read/Write all properties Contact |Import and Exchange hybrid | | Reset password |Preparation for enabling password writeback | -### Express installation wizard summary +### Express settings wizard ++In an express settings installation, the wizard creates some accounts and settings for you. - -The following is a summary of the express installation wizard pages, the credentials collected, and what they are used for. +The following table is a summary of the express settings wizard pages, the credentials that are collected, and what they're used for: -| Wizard Page | Credentials Collected | Permissions Required | Used For | +| Wizard page | Credentials collected | Permissions required | Purpose | | | | | |-| N/A |User running the installation wizard |Administrator of the local server |<li>Creates the ADSync service account that is used as to run the synchronization service. | -| Connect to Azure AD |Azure AD directory credentials |Global administrator role in Azure AD |<li>Enabling sync in the Azure AD directory.</li> <li>Creation of the Azure AD Connector account that is used for on-going sync operations in Azure AD.</li> | -| Connect to AD DS |On-premises Active Directory credentials |Member of the Enterprise Admins (EA) group in Active Directory |<li>Creates the AD DS Connector account in Active Directory and grants permissions to it. 
This created account is used to read and write directory information during synchronization.</li> | +| N/A |The user that's running the installation wizard. |Administrator of the local server. |Used to create the ADSync service account that's used to run the sync service. | +| Connect to Azure AD |Azure AD directory credentials. |Global Administrator role in Azure AD. |- Used to enable sync in the Azure AD directory.<br /> - Used to create the Azure AD Connector account that's used for ongoing sync operations in Azure AD. | +| Connect to AD DS |Windows Server AD credentials. |Member of the Enterprise Admins group in Windows Server AD. |Used to create the AD DS Connector account in Windows Server AD and grant permissions to it. This created account is used to read and write directory information during sync. | +<a name="custom-installation-settings"></a> -## Custom installation settings +## Custom settings -With the custom settings installation, the wizard offers you more choices and options. +In a custom settings installation, you have more choices and options in the wizard. -### Custom installation wizard summary -The following is a summary of the custom installation wizard pages, the credentials collected, and what they are used for. +### Custom settings wizard - +The following table is a summary of the custom settings wizard pages, the credentials collected, and what they're used for: -| Wizard Page | Credentials Collected | Permissions Required | Used For | +| Wizard page | Credentials collected | Permissions required | Purpose | | | | | |-| N/A |User running the installation wizard |<li>Administrator of the local server</li><li>If using a full SQL Server, the user must be System Administrator (SA) in SQL</li> |By default, creates the local account that is used as the sync engine service account. The account is only created when the admin does not specify a particular account. 
| -| Install synchronization services, Service account option |AD or local user account credentials |User, permissions are granted by the installation wizard |If the admin specifies an account, this account is used as the service account for the sync service. | -| Connect to Azure AD |Azure AD directory credentials |Global administrator role in Azure AD |<li>Enabling sync in the Azure AD directory.</li> <li>Creation of the Azure AD Connector account that is used for on-going sync operations in Azure AD.</li> | -| Connect your directories |On-premises Active Directory credentials for each forest that is connected to Azure AD |The permissions depend on which features you enable and can be found in Create the AD DS Connector account |This account is used to read and write directory information during synchronization. | -| AD FS Servers |For each server in the list, the wizard collects credentials when the sign-in credentials of the user running the wizard are insufficient to connect |Domain Administrator |Installation and configuration of the AD FS server role. | -| Web application proxy servers |For each server in the list, the wizard collects credentials when the sign-in credentials of the user running the wizard are insufficient to connect |Local admin on the target machine |Installation and configuration of WAP server role. | -| Proxy trust credentials |Federation service trust credentials (the credentials the proxy uses to enroll for a trust certificate from the FS |Domain account that is a local administrator of the AD FS server |Initial enrollment of FS-WAP trust certificate. | -| AD FS Service Account page, "Use a domain user account option" |AD user account credentials |Domain user |The Azure AD user account whose credentials are provided is used as the sign-in account of the AD FS service. | +| N/A |The user that's running the installation wizard. 
|- Administrator of the local server.<br />- If you use a full instance of SQL Server, the user must be System Administrator (sysadmin) in SQL Server. |By default, used to create the local account that's used as the sync engine service account. The account is created only when the admin doesn't specify an account. | +| Install synchronization services, service account option |The Windows Server AD or local user account credentials. |User and permissions are granted by the installation wizard. |If the admin specifies an account, this account is used as the service account for the sync service. | +| Connect to Azure AD |Azure AD directory credentials. |Global Administrator role in Azure AD. |- Used to enable sync in the Azure AD directory.<br />- Used to create the Azure AD Connector account that's used for ongoing sync operations in Azure AD. | +| Connect your directories |Windows Server AD credentials for each forest that is connected to Azure AD. |The permissions depend on which features you enable and can be found in [Create the AD DS Connector account](#create-the-ad-ds-connector-account). |This account is used to read and write directory information during sync. | +| AD FS Servers |For each server in the list, the wizard collects credentials when the sign-in credentials of the user running the wizard are insufficient to connect. |The Domain Administrator account. |Used during installation and configuration of the Active Directory Federation Services (AD FS) server role. | +| Web application proxy servers |For each server in the list, the wizard collects credentials when the sign-in credentials of the user running the wizard are insufficient to connect. |Local admin on the target machine. |Used during installation and configuration of the web application proxy (WAP) server role. | +| Proxy trust credentials |Federation service trust credentials (the credentials the proxy uses to enroll for a trust certificate from the federation services (FS)). 
|The domain account that's a Local Administrator of the AD FS server. |Initial enrollment of the FS-WAP trust certificate. | +| AD FS Service Account page, **Use a domain user account** option |The Windows Server AD user account credentials. |A domain user. |The Windows Server AD user account whose credentials are provided is used as the sign-in account of the AD FS service. | ### Create the AD DS Connector account ->[!IMPORTANT] ->A new PowerShell Module named ADSyncConfig.psm1 was introduced with build **1.1.880.0** (released in August 2018) that includes a collection of cmdlets to help you configure the correct Active Directory permissions for the Azure AD DS Connector account. +> [!IMPORTANT] +> A new PowerShell module named *ADSyncConfig.psm1* was introduced with build 1.1.880.0 (released in August 2018). The module includes a collection of cmdlets that help you configure the correct Windows Server AD permissions for the Azure AD DS Connector account. >->For more information see [Azure AD Connect: Configure AD DS Connector Account Permission](how-to-connect-configure-ad-ds-connector-account.md) +> For more information, see [Azure AD Connect: Configure AD DS Connector account permission](how-to-connect-configure-ad-ds-connector-account.md). -The account you specify on the **Connect your directories** page must be present in Active Directory prior to installation. Azure AD Connect version 1.1.524.0 and later has the option to let the Azure AD Connect wizard create the **AD DS Connector account** used to connect to Active Directory. +The account you specify on the **Connect your directories** page must be created in Windows Server AD before installation. Azure AD Connect version 1.1.524.0 and later has the option to let the Azure AD Connect wizard create the AD DS Connector account that's used to connect to Windows Server AD. -It must also have the required permissions granted. 
The installation wizard does not verify the permissions and any issues are only found during synchronization. +The account you specify also must have the required permissions. The installation wizard doesn't verify the permissions, and any issues are found only during the sync process. -Which permissions you require depends on the optional features you enable. If you have multiple domains, the permissions must be granted for all domains in the forest. If you do not enable any of these features, the default **Domain User** permissions are sufficient. +Which permissions you require depends on the optional features you enable. If you have multiple domains, the permissions must be granted for all domains in the forest. If you don't enable any of these features, the default Domain User permissions are sufficient. | Feature | Permissions | | | |-| ms-DS-ConsistencyGuid feature |Write permissions to the ms-DS-ConsistencyGuid attribute documented in [Design Concepts - Using ms-DS-ConsistencyGuid as sourceAnchor](plan-connect-design-concepts.md#using-ms-ds-consistencyguid-as-sourceanchor). | -| Password hash sync |<li>Replicate Directory Changes</li> <li>Replicate Directory Changes All | +| ms-DS-ConsistencyGuid feature |Write permissions to the `ms-DS-ConsistencyGuid` attribute documented in [Design Concepts - Using ms-DS-ConsistencyGuid as sourceAnchor](plan-connect-design-concepts.md#using-ms-ds-consistencyguid-as-sourceanchor). | +| Password hash sync |- Replicate Directory Changes<br />- Replicate Directory Changes All | | Exchange hybrid deployment |Write permissions to the attributes documented in [Exchange hybrid writeback](reference-connect-sync-attributes-synchronized.md#exchange-hybrid-writeback) for users, groups, and contacts. |-| Exchange Mail Public Folder |Read permissions to the attributes documented in [Exchange Mail Public Folder](reference-connect-sync-attributes-synchronized.md#exchange-mail-public-folder) for public folders. 
| +| Exchange Mail Public Folder |Read permissions to the attributes documented in [Exchange Mail Public Folder](reference-connect-sync-attributes-synchronized.md#exchange-mail-public-folder) for public folders. | | Password writeback |Write permissions to the attributes documented in [Getting started with password management](../authentication/tutorial-enable-sspr-writeback.md) for users. |-| Device writeback |Permissions granted with a PowerShell script as described in [device writeback](how-to-connect-device-writeback.md). | -| Group writeback |Allows you to writeback **Microsoft 365 Groups** to a forest with Exchange installed.| +| Device writeback |Permissions granted with a PowerShell script as described in [Device writeback](how-to-connect-device-writeback.md). | +| Group writeback |Allows you to write back *Microsoft 365 Groups* to a forest that has Exchange installed.| -## Upgrade -When you upgrade from one version of Azure AD Connect to a new release, you need the following permissions: +<a name="upgrade"></a> ->[!IMPORTANT] ->Starting with build 1.1.484, Azure AD Connect introduced a regression bug which requires sysadmin permissions to upgrade the SQL database. This bug is corrected in build 1.1.647. If you are upgrading to this build, you will need sysadmin permissions. Dbo permissions are not sufficient. If you attempt to upgrade Azure AD Connect without having sysadmin permissions, the upgrade will fail and Azure AD Connect will no longer function correctly afterwards. Microsoft is aware of this and is working to correct this. +## Permissions required to upgrade When you upgrade from one version of Azure AD Connect to a new release, you need the following permissions: -| Principal | Permissions required | Used for | +| Principal | Permissions required | Purpose | | | | |-| User running the installation wizard |Administrator of the local server |Update binaries. 
| -| User running the installation wizard |Member of ADSyncAdmins |Make changes to Sync Rules and other configuration. | -| User running the installation wizard |If you use a full SQL server: DBO (or similar) of the sync engine database |Make database level changes, such as updating tables with new columns. | +| The user that's running the installation wizard |Administrator of the local server |Used to update binaries. | +| The user that's running the installation wizard |Member of ADSyncAdmins |Used to make changes to sync rules and other configurations. | +| The user that's running the installation wizard |If you use a full instance of SQL Server: DBO (or similar) of the sync engine database |Used to make database-level changes, such as updating tables with new columns. | ++> [!IMPORTANT] +> In build 1.1.484, a regression bug was introduced in Azure AD Connect. The bug requires sysadmin permissions to upgrade the SQL Server database. The bug is corrected in build 1.1.647. To upgrade to this build, you must have sysadmin permissions. In this scenario, DBO permissions aren't sufficient. If you attempt to upgrade Azure AD Connect without sysadmin permissions, the upgrade fails and Azure AD Connect no longer functions correctly. ++## Details about the created accounts ++The following sections give you more information about the accounts that Azure AD Connect creates. -## More about the created accounts ### AD DS Connector account-If you use express settings, then an account is created in Active Directory that is used for synchronization. The created account is located in the forest root domain in the Users container and has its name prefixed with **MSOL_**. The account is created with a long complex password that does not expire. If you have a password policy in your domain, make sure long and complex passwords would be allowed for this account. - +If you use express settings, an account that's used for syncing is created in Windows Server AD. 
The created account is located in the forest root domain in the Users container. The account name is prefixed with *MSOL_*. The account is created with a long, complex password that doesn't expire. If you have a password policy in your domain, make sure that long and complex passwords are allowed for this account. + -If you use custom settings, then you are responsible for creating the account before you start the installation. See Create the AD DS Connector account. +If you use custom settings, you're responsible for creating the account before you start the installation. See [Create the AD DS Connector account](#create-the-ad-ds-connector-account). ### ADSync service account-The sync service can run under different accounts. It can run under a **Virtual Service Account** (VSA), a **Group Managed Service Account** (gMSA/sMSA), or a regular user account. The supported options were changed with the 2017 April release of Connect when you do a fresh installation. If you upgrade from an earlier release of Azure AD Connect, these additional options are not available. ++The sync service can run under different accounts. It can run under a *virtual service account* (VSA), a *group managed service account* (gMSA), a *standalone managed service account* (sMSA), or a regular user account. The supported options were changed with the 2017 April release of Azure AD Connect when you do a fresh installation. If you upgrade from an earlier release of Azure AD Connect, these other options aren't available. 
| -| [Group Managed Service Account](#group-managed-service-account) | Custom, 2017 April and later | If you use a remote SQL server, then we recommend to use a group managed service account. | -| [User account](#user-account) | Express and custom, 2017 April and later | A user account prefixed with AAD_ is only created during installation when installed on Windows Server 2008 and when installed on a Domain Controller. | -| [User account](#user-account) | Express and custom, 2017 March and earlier | A local account prefixed with AAD_ is created during installation. When using custom installation, another account can be specified. | +| [VSA](#vsa) | Express and custom, 2017 April and later | This option is used for all express settings installations, except for installations on a domain controller. For custom settings, it's the default option. | +| [gMSA](#gmsa) | Custom, 2017 April and later | If you use a remote instance of SQL Server, we recommend that you use a gMSA. | +| [User account](#user-account) | Express and custom, 2017 April and later | A user account prefixed with *AAD_* is created during installation only when Azure AD Connect is installed on Windows Server 2008 or when it's installed on a domain controller. | +| [User account](#user-account) | Express and custom, 2017 March and earlier | A local account prefixed with *AAD_* is created during installation. In a custom installation, you can specify a different account. | -If you use Connect with a build from 2017 March or earlier, then you should not reset the password on the service account since Windows destroys the encryption keys for security reasons. You cannot change the account to any other account without reinstalling Azure AD Connect. If you upgrade to a build from 2017 April or later, then it is supported to change the password on the service account but you cannot change the account used. 
+If you use Azure AD Connect with a build from 2017 March or earlier, don't reset the password on the service account. If you do, Windows destroys the encryption keys for security reasons. You can't change the account to any other account without reinstalling Azure AD Connect. If you upgrade to a build from 2017 April or later, you can change the password on the service account, but you can't change the account that's used. -> [!Important] -> You can only set the service account on first installation. It is not supported to change the service account after the installation has completed. +> [!IMPORTANT] +> You can set the service account only on first installation. You can't change the service account after installation is finished. -This is a table of the default, recommended, and supported options for the sync service account. +The following table describes default, recommended, and supported options for the sync service account. Legend: -- **Bold** indicates the default option and in most cases the recommended option.-- *Italic* indicates the recommended option when it is not the default option.-- 2008 - Default option when installed on Windows Server 2008-- Non-bold - Supported option-- Local account - Local user account on the server-- Domain account - Domain user account-- sMSA - [standalone Managed Service account](../../active-directory/fundamentals/service-accounts-on-premises.md)-- gMSA - [group Managed Service account](/windows-server/security/group-managed-service-accounts/group-managed-service-accounts-overview)+- **Bold** = The default option and, in most cases, the recommended option. +- *Italic* = The recommended option when it isn't the default option. 
+- 2008 = The default option when installed on Windows Server 2008. +- Non-bold = A supported option. +- Local account = Local user account on the server. +- Domain account = Domain user account. +- sMSA = [standalone managed service account](../../active-directory/fundamentals/service-accounts-on-premises.md). +- gMSA = [group managed service account](/windows-server/security/group-managed-service-accounts/group-managed-service-accounts-overview). -| | LocalDB</br>Express | LocalDB/LocalSQL</br>Custom | Remote SQL</br>Custom | +| | Local database<br />Express | Local database/Local SQL Server<br />Custom | Remote SQL Server<br />Custom | | | | | |-| **domain-joined machine** | **VSA**</br>Local account (2008) | **VSA**</br>Local account (2008)</br>Local account</br>Domain account</br>sMSA,gMSA | **gMSA**</br>Domain account | -| **Domain Controller** | **Domain account** | *gMSA*</br>**Domain account**</br>sMSA| *gMSA*</br>**Domain account**| +| **Domain-joined machine** | **VSA**<br />Local account (2008) | **VSA**<br />Local account (2008)<br />Local account<br />Domain account<br />sMSA, gMSA | **gMSA**<br />Domain account | +| **Domain controller** | **Domain account** | *gMSA*<br />**Domain account**<br />sMSA| *gMSA*<br />**Domain account**| ++#### VSA ++A VSA is a special type of account that doesn't have a password and is managed by Windows. +++The VSA is intended for scenarios in which the sync engine and SQL Server are on the same server. If you use remote SQL Server, we recommend that you use a gMSA instead of a VSA. -#### Virtual service account -A virtual service account is a special type of account that does not have a password and is managed by Windows. +The VSA feature requires Windows Server 2008 R2 or later. If you install Azure AD Connect on Windows Server 2008, the installation falls back to using a [user account](#user-account) instead of a VSA. 
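+You can confirm which account the sync service currently runs as directly on the Azure AD Connect server. The following sketch assumes the default sync service name, `ADSync`; when a VSA is in use, `StartName` shows `NT SERVICE\ADSync`:

```powershell
# Show the logon account of the Azure AD Connect sync service.
# 'ADSync' is the default service name for the sync engine.
Get-CimInstance -ClassName Win32_Service -Filter "Name='ADSync'" |
    Select-Object -Property Name, StartName, State
```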
- +#### gMSA -The VSA is intended to be used with scenarios where the sync engine and SQL are on the same server. If you use remote SQL, then we recommend to use a Group Managed Service Account instead. +If you use a remote instance of SQL Server, we recommend that you use a gMSA. For more information about how to prepare Windows Server AD for gMSA, see [Group managed service accounts overview](/windows-server/security/group-managed-service-accounts/group-managed-service-accounts-overview). -This feature requires Windows Server 2008 R2 or later. If you install Azure AD Connect on Windows Server 2008, then the installation falls back to using a [user account](#user-account) instead. +To use this option, on the [Install required components](how-to-connect-install-custom.md#install-required-components) page, select **Use an existing service account**, and then select **Managed Service Account**. -#### Group managed service account -If you use a remote SQL server, then we recommend to using a **group managed service account**. For more information on how to prepare your Active Directory for Group Managed Service account, see [Group Managed Service Accounts Overview](/windows-server/security/group-managed-service-accounts/group-managed-service-accounts-overview). -To use this option, on the [Install required components](how-to-connect-install-custom.md#install-required-components) page, select **Use an existing service account**, and select **Managed Service Account**. - -It is also supported to use a [standalone managed service account](../../active-directory/fundamentals/service-accounts-on-premises.md). However, these can only be used on the local machine and there is no benefit to use them over the default virtual service account. +You also can use an [sMSA](../../active-directory/fundamentals/service-accounts-on-premises.md) in this scenario. However, you can use an sMSA only on the local computer, and there's no benefit to using an sMSA instead of the default VSA. 
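+As an illustration only (the account and computer names are placeholders), provisioning a gMSA for the sync service with the ActiveDirectory PowerShell module might look like the following sketch. It assumes you can create a Key Distribution Services (KDS) root key in the forest, which gMSAs require:

```powershell
# One-time per forest: create a KDS root key if one doesn't already exist.
# In production, allow up to 10 hours for the key to replicate;
# -EffectiveImmediately is practical only in single-DC test labs.
Add-KdsRootKey -EffectiveImmediately

# Create the gMSA and allow the Azure AD Connect server (placeholder
# computer account AADCONNECT01$) to retrieve its managed password.
New-ADServiceAccount -Name "gMSA-ADSync" `
    -DNSHostName "gmsa-adsync.contoso.com" `
    -PrincipalsAllowedToRetrieveManagedPassword "AADCONNECT01$"

# On the Azure AD Connect server: link and verify the account.
Install-ADServiceAccount -Identity "gMSA-ADSync"
Test-ADServiceAccount -Identity "gMSA-ADSync"
```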
-This feature requires Windows Server 2012 or later. If you need to use an older operating system and use remote SQL, then you must use a [user account](#user-account). +The sMSA feature requires Windows Server 2012 or later. If you need to use an earlier version of an operating system and you use remote SQL Server, you must use a [user account](#user-account). #### User account-A local service account is created by the installation wizard (unless you specify the account to use in custom settings). The account is prefixed **AAD_** and used for the actual sync service to run as. If you install Azure AD Connect on a Domain Controller, the account is created in the domain. The **AAD_** service account must be located in the domain if: - - you use a remote server running SQL server - - you use a proxy that requires authentication - +A local service account is created by the installation wizard (unless you specify the account to use in custom settings). The account is prefixed with *AAD_* and is the account that the sync service runs as. If you install Azure AD Connect on a domain controller, the account is created in the domain. The *AAD_* service account must be located in the domain if: -The account is created with a long complex password that does not expire. +- You use a remote server running SQL Server. +- You use a proxy that requires authentication. -This account is used to store the passwords for the other accounts in a secure way. These other accounts passwords are stored encrypted in the database. The private keys for the encryption keys are protected with the cryptographic services secret-key encryption using Windows Data Protection API (DPAPI). -If you use a full SQL Server, then the service account is the DBO of the created database for the sync engine. The service will not function as intended with any other permissions. A SQL login is also created. +The *AAD_* service account is created with a long, complex password that doesn't expire. 
-The account is also granted permissions to files, registry keys, and other objects related to the Sync Engine. +This account is used to securely store the passwords for the other accounts. The passwords are stored encrypted in the database. The private keys for the encryption keys are protected with the cryptographic services secret key encryption by using Windows Data Protection API (DPAPI). ++If you use a full instance of SQL Server, the service account is the DBO of the created database for the sync engine. The service won't function as intended with any other permissions. A SQL Server login also is created. ++The account is also granted permissions to files, registry keys, and other objects related to the sync engine. ### Azure AD Connector account-An account in Azure AD is created for the sync service's use. This account can be identified by its display name. - +An account in Azure AD is created for the sync service to use. You can identify this account by its display name. + -The name of the server the account is used on can be identified in the second part of the user name. In the picture, the server name is DC1. If you have staging servers, each server has its own account. +The name of the server the account is used on can be identified in the second part of the username. In the preceding figure, the server name is DC1. If you have staging servers, each server has its own account. -The account is created with a long complex password that does not expire. It is granted a special role **Directory Synchronization Accounts** that has only permissions to perform directory synchronization tasks. This special built-in role cannot be granted outside of the Azure AD Connect wizard. The Azure portal shows this account with the role **User**. +A server account is created with a long, complex password that doesn't expire. The account is granted a special Directory Synchronization Accounts role that has permissions to perform only directory synchronization tasks. 
This special built-in role can't be granted outside of the Azure AD Connect wizard. The Azure portal shows this account with the User role. -There is a limit of 20 sync service accounts in Azure AD. To get the list of existing Azure AD service accounts in your Azure AD, run the following Azure AD PowerShell cmdlet: `Get-AzureADDirectoryRole | where {$_.DisplayName -eq "Directory Synchronization Accounts"} | Get-AzureADDirectoryRoleMember` +Azure AD has a limit of 20 sync service accounts. To get the list of existing Azure AD service accounts in your Azure AD instance, run the following Azure AD PowerShell cmdlet: `Get-AzureADDirectoryRole | where {$_.DisplayName -eq "Directory Synchronization Accounts"} | Get-AzureADDirectoryRoleMember` To remove unused Azure AD service accounts, run the following Azure AD PowerShell cmdlet: `Remove-AzureADUser -ObjectId <ObjectId-of-the-account-you-wish-to-remove>` ->[!NOTE] ->Before you can use the above PowerShell commands you will need to install the [Azure Active Directory PowerShell for Graph module](/powershell/azure/active-directory/install-adv2#installing-the-azure-ad-module) and connect to your instance of Azure AD using [Connect-AzureAD](/powershell/module/azuread/connect-azuread) +> [!NOTE] +> Before you can use these PowerShell commands, you must install the [Azure Active Directory PowerShell for Graph module](/powershell/azure/active-directory/install-adv2#installing-the-azure-ad-module) and connect to your instance of Azure AD by using [Connect-AzureAD](/powershell/module/azuread/connect-azuread). ++For more information about how to manage or reset the password for the Azure AD Connect account, see [Manage the Azure AD Connect account](how-to-connect-azureadaccount.md). 
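+The two cmdlets above can be combined into a short housekeeping script. This is a sketch that assumes the AzureAD module is installed and that you have confirmed which accounts are unused before removing anything:

```powershell
# Connect with an account that can read directory roles.
Connect-AzureAD

# List the existing sync service accounts (Azure AD allows up to 20).
$syncAccounts = Get-AzureADDirectoryRole |
    Where-Object { $_.DisplayName -eq "Directory Synchronization Accounts" } |
    Get-AzureADDirectoryRoleMember
$syncAccounts | Select-Object -Property DisplayName, UserPrincipalName, ObjectId

# Remove an account only after you confirm it's unused.
# The ObjectId below is a placeholder.
# Remove-AzureADUser -ObjectId "00000000-0000-0000-0000-000000000000"
```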
-For additional information on how to manage or reset the password for the Azure AD Connector account see [Manage the Azure AD Connect account](how-to-connect-azureadaccount.md) +## Related articles -## Related documentation -If you did not read the documentation on [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md), the following table provides links to related topics. +For more information about Azure AD Connect, see these articles: |Topic |Link| | | | |Download Azure AD Connect | [Download Azure AD Connect](https://go.microsoft.com/fwlink/?LinkId=615771)|-|Install using Express settings | [Express installation of Azure AD Connect](how-to-connect-install-express.md)| -|Install using Customized settings | [Custom installation of Azure AD Connect](./how-to-connect-install-custom.md)| +|Install by using express settings | [Express installation of Azure AD Connect](how-to-connect-install-express.md)| +|Install by using customized settings | [Custom installation of Azure AD Connect](./how-to-connect-install-custom.md)| |Upgrade from DirSync | [Upgrade from Azure AD sync tool (DirSync)](how-to-dirsync-upgrade-get-started.md)| |After installation | [Verify the installation and assign licenses](how-to-connect-post-installation.md)| ## Next steps-Learn more about [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md). ++Learn more about [integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md). |
active-directory | Reference Connect Health User Privacy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-health-user-privacy.md | Title: 'Azure AD Connect Health and User Privacy | Microsoft Docs' -description: This document describes user privacy with Azure AD Connect Health. + Title: Azure AD Connect Health and user privacy +description: Learn about user privacy and data collection with Azure AD Connect Health. -# User privacy and Azure AD Connect Health +# User privacy and Azure AD Connect Health +This article describes Azure AD Connect Health and user privacy. For information about Azure AD Connect and user privacy, see [User privacy and Azure AD Connect](reference-connect-user-privacy.md). ->[!NOTE] ->This article deals with Azure AD Connect Health and user privacy. For information on Azure AD Connect and user privacy see the article [here](reference-connect-user-privacy.md). ## User privacy classification-Azure AD Connect Health falls into the **data processor** category of GDPR classification. As a data processor pipeline, the service provides data processing services to key partners and end consumers. Azure AD Connect Health does not generate user data and has no independent control over what personal data is collected and how it is used. Data retrieval, aggregation, analysis, and reporting in Azure AD Connect Health are based on existing on-premises data. ++Azure AD Connect Health falls into the *data processor* category of GDPR classification. As a data processor pipeline, the service provides data processing services to key partners and end consumers. Azure AD Connect Health doesn't generate user data, and it has no independent control over what personal data is collected and how it's used. Data retrieval, aggregation, analysis, and reporting in Azure AD Connect Health are based on existing on-premises data. 
## Data retention policy-Azure AD Connect Health does not generate reports, perform analytics, or provide insights beyond 30 days. Therefore, Azure AD Connect Health does not store, process, or retain any data beyond 30 days. This design is compliant with the GDPR regulations, Microsoft privacy compliance regulations, and Azure AD data retention policies. -Servers with active **Health service data is not up to date** **error** alerts for over 30 consecutive days suggest that no data has reached Connect Health during that time span. These servers will be disabled and not shown in Connect Health portal. To re-enable the servers, you must uninstall and [reinstall the health agent](how-to-connect-health-agent-install.md). -Please note that this does not apply to **warnings** with the same alert type. Warnings indicate that partial data is missing from the server you are alerted for. - -## Disable data collection and monitoring in Azure AD Connect Health -Azure AD Connect Health enables you to stop data collection for each individual monitored server or for an instance of a monitored service. For example, you can stop data collection for individual ADFS (Active Directory Federation Services) servers that are monitored using Azure AD Connect Health. You can also stop data collection for the entire ADFS instance that is being monitored using Azure AD Connect Health. When you choose to do so, the corresponding servers are deleted from the Azure AD Connect Health portal, after stopping data collection. +Azure AD Connect Health doesn't generate reports, perform analytics, or provide insights beyond 30 days. Therefore, Azure AD Connect Health doesn't store, process, or retain any data beyond 30 days. This design is compliant with the GDPR regulations, Microsoft privacy compliance regulations, and Azure AD data retention policies. 
++Servers that have active **Health service data is not up to date** error alerts for more than 30 consecutive days suggest that no data has reached Connect Health during that time. These servers will be disabled and not shown in the Connect Health portal. To re-enable the servers, you must uninstall and [reinstall the health agent](how-to-connect-health-agent-install.md). This doesn't apply to *warnings* for the same alert type. Warnings indicate that partial data is missing from the server you're alerted for. ++## Disable data collection and monitoring ->[!IMPORTANT] -> You need either Azure AD Global Administrator privileges or the Contributor role in Azure RBAC to delete monitored servers from Azure AD Connect Health. +You can use Azure AD Connect Health to stop data collection for a specific monitored server or for an instance of a monitored service. For example, you can stop data collection for individual Active Directory Federation Services (AD FS) servers that are monitored by using Azure AD Connect Health. You can also stop data collection for the entire AD FS instance that's being monitored by using Azure AD Connect Health. If you choose to stop data collection for a specific monitored server, the server is deleted from the Azure AD Connect Health portal after data collection is stopped. ++> [!IMPORTANT] +> To delete monitored servers from Azure AD Connect Health, you must have either Azure AD Global Administrator account permissions or the Contributor role in Azure role-based access control. >-> Removing a server or service instance from Azure AD Connect Health is not a reversible action. +> Removing a server or service instance from Azure AD Connect Health is *not* a reversible action. -### What to expect? 
-If you stop data collection and monitoring for an individual monitored server or an instance of a monitored service, note the following: +### What to expect -- When you delete an instance of a monitored service, the instance is removed from the Azure AD Connect Health monitoring service list in the portal. -- When you delete a monitored server or an instance of a monitored service, the Health Agent is NOT uninstalled or removed from your servers. The Health Agent is configured not to send data to Azure AD Connect Health. You need to manually uninstall the Health Agent on previously monitored servers.-- If you have not uninstalled the Health Agent before performing this step, you may see error events on the server(s) related to the Health Agent.-- All data belonging to the instance of the monitored service is deleted as per the Microsoft Azure Data Retention Policy.+If you stop data collection and monitoring for an individual monitored server or an instance of a monitored service, you can expect the following results: -### Disable data collection and monitoring for an instance of a monitored service -See [how to remove a service instance from Azure AD Connect Health](how-to-connect-health-operations.md#delete-a-service-instance-from-azure-ad-connect-health-service). +- When you delete an instance of a monitored service, the instance is removed from the Azure AD Connect Health monitoring service list in the portal. +- When you delete a monitored server or an instance of a monitored service, the health agent *isn't* uninstalled or removed from your servers. Instead, the health agent is configured to not send data to Azure AD Connect Health. You must manually uninstall the health agent on a server that previously was monitored. +- If you don't uninstall the health agent before you delete a monitored server or an instance of a monitored service, you might see error events related to the health agent on the server. 
+- All data that belongs to the instance of the monitored service is deleted per the Microsoft Azure Data Retention Policy. ### Disable data collection and monitoring for a monitored server-See [how to remove a server from Azure AD Connect Health](how-to-connect-health-operations.md#delete-a-server-from-the-azure-ad-connect-health-service). -### Disable data collection and monitoring for all monitored services in Azure AD Connect Health -Azure AD Connect Health also provides the option to stop data collection of **all** registered services in the tenant. We recommend careful consideration and full acknowledgement of all Hybrid Identity Administrators before taking the action. Once the process begins, Connect Health service will stop receiving, processing, and reporting any data of all your services. Existing data in Connect Health service will be retained for no more than 30 days. -If you want to stop data collection of specific server, please follow steps at deletion of specific servers. To stop tenant-wise data collection, follow the following steps to stop data collection and delete all services of the tenant. +See [How to remove a server from Azure AD Connect Health](how-to-connect-health-operations.md#delete-a-server-from-the-azure-ad-connect-health-service). ++### Disable data collection and monitoring for an instance of a monitored service ++See [How to remove a service instance from Azure AD Connect Health](how-to-connect-health-operations.md#delete-a-service-instance-from-azure-ad-connect-health-service). ++### Disable data collection and monitoring for all monitored services ++Azure AD Connect Health provides the option to stop data collection of *all* registered services in the tenant. We recommend careful consideration and full acknowledgment of all hybrid identity administrators before you take this action. After the process begins, the Azure AD Connect Health service stops receiving, processing, and reporting any data for all of your services. 
Existing data in Azure AD Connect Health service is retained for no more than 30 days. ++If you want to stop data collection on a specific server, complete the steps to delete a specific server. To stop data collection for a tenant, complete the following steps to stop data collection and delete all services for the tenant: ++1. In the main menu under **Configuration**, select **General Settings**. +1. In the command bar, select **Stop Data Collection**. Other options for configuring the tenant settings are disabled after the process starts. -1. Click on **General Settings** under configuration in the main blade. -2. Click on **Stop Data Collection** button on the top of the blade. The other options of tenant configuration settings will be disabled once the process starts. - -  - -3. Ensure the list of onboarded services which are affected by stopping data collections. -4. Enter the exact tenant name to enable the **Delete** action button -5. Click on **Delete** to trigger the deletion of all services. Connect Health will stop receiving, processing, reporting any data sent from your onboarded services. The entire process of can take up to 24 hours. Notice that this step is not reversible. -6. After the process is completed, you will not see any registered services in Connect Health any more. + :::image type="content" source="media/reference-connect-health-user-privacy/gdpr4.png" alt-text="Screenshot that shows the command to stop data collection in the portal."::: +1. Check the list of onboarded services that are affected by stopping data collection. +1. Enter the exact tenant name to enable the **Delete** button. +1. Select **Delete** to initiate the deletion of all services. Azure AD Connect Health will stop receiving, processing, and reporting any data that's sent from your onboarded services. The entire process might take up to 24 hours. *This step isn't reversible*. 
++When the process is finished, you won't see any registered services in Azure AD Connect Health. +++## Re-enable data collection and monitoring -## Re-enable data collection and monitoring in Azure AD Connect Health To re-enable monitoring in Azure AD Connect Health for a previously deleted monitored service, you must uninstall and [reinstall the health agent](how-to-connect-health-agent-install.md) on all the servers. ### Re-enable data collection and monitoring for all monitored services -Tenant-wise data collection can be resumed in Azure AD Connect Health. We recommend careful consideration and full acknowledgement of all global admins before taking the action. +For tenants, data collection can be resumed in Azure AD Connect Health. We recommend careful consideration and full acknowledgment of all global administrators before you take this action. ++> [!IMPORTANT] +> The following steps are available beginning 24 hours after a disable action. After you enable data collection, the presented insight and monitoring data in Azure AD Connect Health won't show any data that was collected before the disable action. ->[!IMPORTANT] -> The following steps will be available after 24 hours of disable action. -> After enabling of data collection, the presented insight and monitoring data in Connect Health will not show any legacy data collected before. +1. In the main menu under **Configuration**, select **General Settings**. +1. In the command bar, select **Enable Data Collection**. -1. Click on **General Settings** under configuration in the main blade. -2. Click on **Enable Data Collection** button on the top of the blade. - -  - -3. Enter the exact tenant name to activate the **Enable** button. -4. Click on **Enable** button to grant permission of data collection in Connect Health service. The change will be applied shortly. -5. 
Follow the [installation process](how-to-connect-health-agent-install.md) to reinstall the agent in the servers to be monitored and the services will be present in the portal. + :::image type="content" source="media/reference-connect-health-user-privacy/gdpr6.png" alt-text="Screenshot that shows the Enable Data Collection command in the portal."::: +1. Enter the exact tenant name to activate the **Enable** button. +1. Select **Enable** to grant permissions for data collection in the Azure AD Connect Health service. The change will be applied shortly. +1. Follow the [installation process](how-to-connect-health-agent-install.md) to reinstall the agent on the servers to be monitored. The services will be present in the portal. ## Next steps-* [Review the Microsoft Privacy policy on Trust Center](https://www.microsoft.com/trustcenter) -* [Azure AD Connect and User Privacy](reference-connect-user-privacy.md) +- Review the [Microsoft privacy policy in the Trust Center](https://www.microsoft.com/trustcenter). +- Learn about [Azure AD Connect and user privacy](reference-connect-user-privacy.md). |
active-directory | Tutorial Federation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tutorial-federation.md | Title: 'Tutorial: Federate a single AD forest environment to Azure | Microsoft Docs' -description: Demonstrates how to setup a hybrid identity environment using federation. + Title: 'Tutorial: Use federation for hybrid identity in a single Active Directory forest' +description: Learn how to set up a hybrid identity environment by using federation to integrate a Windows Server Active Directory forest with Azure Active Directory. -# Tutorial: Federate a single AD forest environment to the cloud +# Tutorial: Use federation for hybrid identity in a single Active Directory forest - +This tutorial shows you how to create a hybrid identity environment in Azure by using federation and Windows Server Active Directory (Windows Server AD). You can use the hybrid identity environment you create for testing or to get more familiar with how hybrid identity works. -The following tutorial will walk you through creating a hybrid identity environment using federation. This environment can then be used for testing or for getting more familiar with how a hybrid identity works. ++In this tutorial, you learn how to: ++> [!div class="checklist"] +> +> - Create a virtual machine. +> - Create a Windows Server Active Directory environment. +> - Create a Windows Server Active Directory user. +> - Create a certificate. +> - Create an Azure Active Directory tenant. +> - Create a Hybrid Identity Administrator account in Azure. +> - Add a custom domain to your directory. +> - Set up Azure AD Connect. +> - Test and verify that users are synced. ## Prerequisites-The following are prerequisites required for completing this tutorial -- A computer with [Hyper-V](/windows-server/virtualization/hyper-v/hyper-v-technology-overview) installed. 
It's suggested to do this on either a [Windows 10](/virtualization/hyper-v-on-windows/about/supported-guest-os) or a [Windows Server 2016](/windows-server/virtualization/hyper-v/supported-windows-guest-operating-systems-for-hyper-v-on-windows) computer.-- An [Azure subscription](https://azure.microsoft.com/free)--- A copy of Windows Server 2016-- A [custom domain](../../active-directory/fundamentals/add-custom-domain.md) that can be verified++To complete the tutorial, you need these items: ++- A computer with [Hyper-V](/windows-server/virtualization/hyper-v/hyper-v-technology-overview) installed. We suggest that you install Hyper-V on a [Windows 10](/virtualization/hyper-v-on-windows/about/supported-guest-os) or [Windows Server 2016](/windows-server/virtualization/hyper-v/supported-windows-guest-operating-systems-for-hyper-v-on-windows) computer. +- An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. +- An [external network adapter](/virtualization/hyper-v-on-windows/quick-start/connect-to-network), so the virtual machine can connect to the internet. +- A copy of Windows Server 2016. +- A [custom domain](../../active-directory/fundamentals/add-custom-domain.md) that can be verified. > [!NOTE]-> This tutorial uses PowerShell scripts so that you can create the tutorial environment in the quickest amount of time. Each of the scripts uses variables that are declared at the beginning of the scripts. You can and should change the variables to reflect your environment. +> This tutorial uses PowerShell scripts to quickly create the tutorial environment. Each script uses variables that are declared at the beginning of the script. Be sure to change the variables to reflect your environment. >->The scripts used create a general Active Directory environment prior to installing Azure AD Connect. They are relevant for all of the tutorials. 
+> The scripts in the tutorial create a general Windows Server Active Directory (Windows Server AD) environment before they install Azure AD Connect. The scripts are also used in related tutorials. >-> Copies of the PowerShell scripts that are used in this tutorial are available on GitHub [here](https://github.com/billmath/tutorial-phs). +> The PowerShell scripts that are used in this tutorial are available on [GitHub](https://github.com/billmath/tutorial-phs). ## Create a virtual machine-The first thing that we need to do, in order to get our hybrid identity environment up and running is to create a virtual machine that will be used as our on-premises Active Directory server. -->[!NOTE] ->If you have never run a script in PowerShell on your host machine you'll need to run `Set-ExecutionPolicy remotesigned` and say yes in PowerShell, prior to running scripts. --Do the following: --1. Open up the PowerShell ISE as Administrator. -2. Run the following script. --```powershell -#Declare variables -$VMName = 'DC1' -$Switch = 'External' -$InstallMedia = 'D:\ISO\en_windows_server_2016_updated_feb_2018_x64_dvd_11636692.iso' -$Path = 'D:\VM' -$VHDPath = 'D:\VM\DC1\DC1.vhdx' -$VHDSize = '64424509440' --#Create New Virtual Machine -New-VM -Name $VMName -MemoryStartupBytes 16GB -BootDevice VHD -Path $Path -NewVHDPath $VHDPath -NewVHDSizeBytes $VHDSize -Generation 2 -Switch $Switch --#Set the memory to be non-dynamic -Set-VMMemory $VMName -DynamicMemoryEnabled $false --#Add DVD Drive to Virtual Machine -Add-VMDvdDrive -VMName $VMName -ControllerNumber 0 -ControllerLocation 1 -Path $InstallMedia --#Mount Installation Media -$DVDDrive = Get-VMDvdDrive -VMName $VMName --#Configure Virtual Machine to Boot from DVD -Set-VMFirmware -VMName $VMName -FirstBootDevice $DVDDrive -``` --## Complete the operating system deployment -In order to finish building the virtual machine, you need to finish the operating system installation. --1. 
Hyper-V Manager, double-click on the virtual machine -2. Click on the Start button. -3. You'll be prompted to 'Press any key to boot from CD or DVD'. Go ahead and do so. -4. On the Windows Server start up screen select your language and click **Next**. -5. Click **Install Now**. -6. Enter your license key and click **Next**. -7. Check **I accept the license terms** and click **Next**. -8. Select **Custom: Install Windows Only (Advanced)** -9. Click **Next** -10. Once the installation has completed, restart the virtual machine, sign-in and run Windows updates to ensure the VM is the most up-to-date. Install the latest updates. --## Install Active Directory pre-requisites -Now that we have a virtual machine up, we need to do a few things prior to installing Active Directory. That is, we need to rename the virtual machine, set a static IP address and DNS information, and install the Remote Server Administration tools. Do the following: --1. Open up the PowerShell ISE as Administrator. -2. Run `Set-ExecutionPolicy remotesigned` and say yes to all [A]. Press Enter. -3. Run the following script. 
--```powershell -#Declare variables -$ipaddress = "10.0.1.117" -$ipprefix = "24" -$ipgw = "10.0.1.1" -$ipdns = "10.0.1.117" -$ipdns2 = "8.8.8.8" -$ipif = (Get-NetAdapter).ifIndex -$featureLogPath = "c:\poshlog\featurelog.txt" -$newname = "DC1" -$addsTools = "RSAT-AD-Tools" --#Set static IP address -New-NetIPAddress -IPAddress $ipaddress -PrefixLength $ipprefix -InterfaceIndex $ipif -DefaultGateway $ipgw --# Set the DNS servers -Set-DnsClientServerAddress -InterfaceIndex $ipif -ServerAddresses ($ipdns, $ipdns2) --#Rename the computer -Rename-Computer -NewName $newname -force --#Install features -New-Item $featureLogPath -ItemType file -Force -Add-WindowsFeature $addsTools -Get-WindowsFeature | Where installed >>$featureLogPath --#Restart the computer -Restart-Computer -``` ++To create a hybrid identity environment, the first task is to create a virtual machine to use as an on-premises Windows Server AD server. ++> [!NOTE] +> If you've never run a script in PowerShell on your host machine, before you run any scripts, open Windows PowerShell ISE as administrator and run `Set-ExecutionPolicy remotesigned`. In the **Execution Policy Change** dialog, select **Yes**. ++To create the virtual machine: ++1. Open Windows PowerShell ISE as administrator. +1. 
Run the following script: ++ ```powershell + #Declare variables + $VMName = 'DC1' + $Switch = 'External' + $InstallMedia = 'D:\ISO\en_windows_server_2016_updated_feb_2018_x64_dvd_11636692.iso' + $Path = 'D:\VM' + $VHDPath = 'D:\VM\DC1\DC1.vhdx' + $VHDSize = '64424509440' + + #Create a new virtual machine + New-VM -Name $VMName -MemoryStartupBytes 16GB -BootDevice VHD -Path $Path -NewVHDPath $VHDPath -NewVHDSizeBytes $VHDSize -Generation 2 -Switch $Switch + + #Set the memory to be non-dynamic + Set-VMMemory $VMName -DynamicMemoryEnabled $false + + #Add a DVD drive to the virtual machine + Add-VMDvdDrive -VMName $VMName -ControllerNumber 0 -ControllerLocation 1 -Path $InstallMedia + + #Mount installation media + $DVDDrive = Get-VMDvdDrive -VMName $VMName + + #Configure the virtual machine to boot from the DVD + Set-VMFirmware -VMName $VMName -FirstBootDevice $DVDDrive + ``` ++## Install the operating system ++To finish creating the virtual machine, install the operating system: ++1. In Hyper-V Manager, double-click the virtual machine. +1. Select **Start**. +1. At the prompt, press any key to boot from CD or DVD. +1. In the Windows Server start window, select your language, and then select **Next**. +1. Select **Install Now**. +1. Enter your license key and select **Next**. +1. Select the **I accept the license terms** checkbox and select **Next**. +1. Select **Custom: Install Windows Only (Advanced)**. +1. Select **Next**. +1. When the installation is finished, restart the virtual machine. Sign in, and then check Windows Update. Install any updates to ensure that the VM is fully up-to-date. ++## Install Windows Server AD prerequisites ++Before you install Windows Server AD, run a script that installs prerequisites: ++1. Open Windows PowerShell ISE as administrator. +1. Run `Set-ExecutionPolicy remotesigned`. In the **Execution Policy Change** dialog, select **Yes to All**. +1. 
Run the following script: ++ ```powershell + #Declare variables + $ipaddress = "10.0.1.117" + $ipprefix = "24" + $ipgw = "10.0.1.1" + $ipdns = "10.0.1.117" + $ipdns2 = "8.8.8.8" + $ipif = (Get-NetAdapter).ifIndex + $featureLogPath = "c:\poshlog\featurelog.txt" + $newname = "DC1" + $addsTools = "RSAT-AD-Tools" + + #Set a static IP address + New-NetIPAddress -IPAddress $ipaddress -PrefixLength $ipprefix -InterfaceIndex $ipif -DefaultGateway $ipgw + + # Set the DNS servers + Set-DnsClientServerAddress -InterfaceIndex $ipif -ServerAddresses ($ipdns, $ipdns2) + + #Rename the computer + Rename-Computer -NewName $newname -force + + #Install features + New-Item $featureLogPath -ItemType file -Force + Add-WindowsFeature $addsTools + Get-WindowsFeature | Where installed >>$featureLogPath + + #Restart the computer + Restart-Computer + ``` ## Create a Windows Server AD environment-Now that we have the VM created and it has been renamed and has a static IP address, we can go ahead and install and configure Active Directory Domain Services. Do the following: --1. Open up the PowerShell ISE as Administrator. -2. Run the following script. 
--```powershell -#Declare variables -$DatabasePath = "c:\windows\NTDS" -$DomainMode = "WinThreshold" -$DomainName = "contoso.com" -$DomainNetBIOSName = "CONTOSO" -$ForestMode = "WinThreshold" -$LogPath = "c:\windows\NTDS" -$SysVolPath = "c:\windows\SYSVOL" -$featureLogPath = "c:\poshlog\featurelog.txt" -$Password = ConvertTo-SecureString "Passw0rd" -AsPlainText -Force --#Install AD DS, DNS and GPMC -start-job -Name addFeature -ScriptBlock { -Add-WindowsFeature -Name "ad-domain-services" -IncludeAllSubFeature -IncludeManagementTools -Add-WindowsFeature -Name "dns" -IncludeAllSubFeature -IncludeManagementTools -Add-WindowsFeature -Name "gpmc" -IncludeAllSubFeature -IncludeManagementTools } -Wait-Job -Name addFeature -Get-WindowsFeature | Where installed >>$featureLogPath --#Create New AD Forest -Install-ADDSForest -CreateDnsDelegation:$false -DatabasePath $DatabasePath -DomainMode $DomainMode -DomainName $DomainName -SafeModeAdministratorPassword $Password -DomainNetbiosName $DomainNetBIOSName -ForestMode $ForestMode -InstallDns:$true -LogPath $LogPath -NoRebootOnCompletion:$false -SysvolPath $SysVolPath -Force:$true -``` ++Now, install and configure Active Directory Domain Services to create the environment: ++1. Open Windows PowerShell ISE as administrator. +1. 
Run the following script: ++ ```powershell + #Declare variables + $DatabasePath = "c:\windows\NTDS" + $DomainMode = "WinThreshold" + $DomainName = "contoso.com" + $DomainNetBIOSName = "CONTOSO" + $ForestMode = "WinThreshold" + $LogPath = "c:\windows\NTDS" + $SysVolPath = "c:\windows\SYSVOL" + $featureLogPath = "c:\poshlog\featurelog.txt" + $Password = ConvertTo-SecureString "Passw0rd" -AsPlainText -Force + + #Install Active Directory Domain Services, DNS, and Group Policy Management Console + start-job -Name addFeature -ScriptBlock { + Add-WindowsFeature -Name "ad-domain-services" -IncludeAllSubFeature -IncludeManagementTools + Add-WindowsFeature -Name "dns" -IncludeAllSubFeature -IncludeManagementTools + Add-WindowsFeature -Name "gpmc" -IncludeAllSubFeature -IncludeManagementTools } + Wait-Job -Name addFeature + Get-WindowsFeature | Where installed >>$featureLogPath + + #Create a new Windows Server AD forest + Install-ADDSForest -CreateDnsDelegation:$false -DatabasePath $DatabasePath -DomainMode $DomainMode -DomainName $DomainName -SafeModeAdministratorPassword $Password -DomainNetbiosName $DomainNetBIOSName -ForestMode $ForestMode -InstallDns:$true -LogPath $LogPath -NoRebootOnCompletion:$false -SysvolPath $SysVolPath -Force:$true + ``` ## Create a Windows Server AD user-Now that we have our Active Directory environment, we need to a test account. This account will be created in our on-premises AD environment and then synchronized to Azure AD. Do the following: -1. Open up the PowerShell ISE as Administrator. -2. Run the following script. +Next, create a test user account. Create this account in your on-premises Active Directory environment. The account is then synced to Azure Active Directory (Azure AD). ++1. Open Windows PowerShell ISE as administrator. +1. 
Run the following script: ++ ```powershell + #Declare variables + $Givenname = "Allie" + $Surname = "McCray" + $Displayname = "Allie McCray" + $Name = "amccray" + $Password = "Pass1w0rd" + $Identity = "CN=amccray,CN=Users,DC=contoso,DC=com" + $SecureString = ConvertTo-SecureString $Password -AsPlainText -Force + + #Create the user + New-ADUser -Name $Name -GivenName $Givenname -Surname $Surname -DisplayName $Displayname -AccountPassword $SecureString + + #Set the password to never expire + Set-ADUser -Identity $Identity -PasswordNeverExpires $true -ChangePasswordAtLogon $false -Enabled $true + ``` -```powershell -#Declare variables -$Givenname = "Allie" -$Surname = "McCray" -$Displayname = "Allie McCray" -$Name = "amccray" -$Password = "Pass1w0rd" -$Identity = "CN=ammccray,CN=Users,DC=contoso,DC=com" -$SecureString = ConvertTo-SecureString $Password -AsPlainText -Force ## Create a certificate for AD FS +You need a TLS or SSL certificate that Active Directory Federation Services (AD FS) will use. The certificate is a self-signed certificate, and you create it to use only for testing. We recommend that you don't use a self-signed certificate in a production environment. -#Create the user -New-ADUser -Name $Name -GivenName $Givenname -Surname $Surname -DisplayName $Displayname -AccountPassword $SecureString To create a certificate: -#Set the password to never expire -Set-ADUser -Identity $Identity -PasswordNeverExpires $true -ChangePasswordAtLogon $false -Enabled $true -``` 1. Open Windows PowerShell ISE as administrator. 1. Run the following script: ## Create a certificate for AD FS -Now we'll create a TLS/SSL certificate that will be used by AD FS. This is will be a self-signed certificate and is only for testing purposes. Microsoft doesn't recommend using a self-signed certificate in a production environment. 
Do the following: + ```powershell + #Declare variables + $DNSname = "adfs.contoso.com" + $Location = "cert:\LocalMachine\My" + + #Create a certificate + New-SelfSignedCertificate -DnsName $DNSname -CertStoreLocation $Location + ``` -1. Open up the PowerShell ISE as Administrator. -2. Run the following script. +## Create an Azure AD tenant -```powershell -#Declare variables -$DNSname = "adfs.contoso.com" -$Location = "cert:\LocalMachine\My" +Now, create an Azure AD tenant, so you can sync your users in Azure: -#Create certificate -New-SelfSignedCertificate -DnsName $DNSname -CertStoreLocation $Location -``` +1. In the [Azure portal](https://portal.azure.com), sign in with the account that's associated with your Azure subscription. +1. Search for and then select **Azure Active Directory**. +1. Select **Create**. -## Create an Azure AD tenant -Now we need to create an Azure AD tenant so that we can synchronize our users to the cloud. To create a new Azure AD tenant, do the following. --1. Browse to the [Azure portal](https://portal.azure.com) and sign in with an account that has an Azure subscription. -2. Select the **plus icon (+)** and search for **Azure Active Directory**. -3. Select **Azure Active Directory** in the search results. -4. Select **Create**.</br> -</br> -5. Provide a **name for the organization** along with the **initial domain name**. Then select **Create**. This will create your directory. -6. Once this has completed, click the **here** link, to manage the directory. --## Create a Hybrid Identity Administrator in Azure AD -Now that we have an Azure AD tenant, we'll create a Hybrid Identity Administratoristrator account. This account is used to create the Azure AD Connector account during Azure AD Connect installation. The Azure AD Connector account is used to write information to Azure AD. To create the Hybrid Identity Administrator account do the following. --1. Under **Manage**, select **Users**.</br> -</br> -2. 
Select **All users** and then select **+ New user**. -3. Provide a name and username for this user. This will be your Hybrid Identity Administrator for the tenant. You'll also want to change the **Directory role** to **Hybrid Identity Administrator.** You can also show the temporary password. When you're done, select **Create**.</br> -</br> -4. Once this has completed, open a new web browser and sign-in to myapps.microsoft.com using the new global administrator account and the temporary password. -5. Change the password for the Hybrid Identity Administrator to something that you'll remember. --## Add the custom domain name to your directory -Now that we have a tenant and a Hybrid Identity Administrator, we need to add our custom domain so that Azure can verify it. Do the following: --1. Back in the [Azure portal](https://aad.portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview) be sure to close the **All Users** blade. -2. On the left, select **Custom domain names**. -3. Select **Add custom domain**.</br> -</br> -4. On **Custom domain names**, enter the name of your custom domain in the box, and click **Add Domain**. -5. On the custom domain name screen you'll be supplied with either TXT or MX information. This information must be added to the DNS information of the domain registrar under your domain. So you need to go to your domain registrar, enter either the TXT or MX information in the DNS settings for your domain. This will allow Azure to verify your domain. This may take up to 24 hours for Azure to verify it. For more information, see the [add a custom domain](../../active-directory/fundamentals/add-custom-domain.md) documentation.</br> -</br> -6. To ensure that it's verified, click the Verify button.</br> -</br> + :::image type="content" source="media/tutorial-federation/create1.png" alt-text="Screenshot that shows how to create an Azure AD tenant."::: +1. Enter a name for the organization and an initial domain name. 
Then select **Create** to create your directory. +1. To manage the directory, select the **here** link. ++## Create a Hybrid Identity Administrator account in Azure AD ++The next task is to create a Hybrid Identity Administrator account. This account is used to create the Azure AD Connector account during Azure AD Connect installation. The Azure AD Connector account is used to write information to Azure AD. ++To create the Hybrid Identity Administrator account: ++1. In the left menu under **Manage**, select **Users**. ++ :::image type="content" source="media/tutorial-federation/gadmin1.png" alt-text="Screenshot that shows Users selected under Manage in the resource menu to create a Hybrid Identity Administrator in Azure AD."::: +1. Select **All users**, and then select **New user**. +1. In the **User** pane, enter a name and a username for the new user. You're creating your Hybrid Identity Administrator account for the tenant. You can show and copy the temporary password. ++ In the **Directory role** pane, select **Hybrid Identity Administrator**. Then select **Create**. ++ :::image type="content" source="media/tutorial-federation/gadmin2.png" alt-text="Screenshot that shows the Create button you select when you create a Hybrid Identity Administrator account in Azure AD."::: +1. In a new web browser window, sign in to `myapps.microsoft.com` by using the new Hybrid Identity Administrator account and the temporary password. +1. Choose a new password for the Hybrid Identity Administrator account and change the password. ++## Add a custom domain name to your directory ++Now that you have a tenant and a Hybrid Identity Administrator account, add your custom domain so that Azure can verify it. ++To add a custom domain name to a directory: ++1. In the [Azure portal](https://aad.portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview), be sure to close the **All users** pane. +1. In the left menu under **Manage**, select **Custom domain names**. +1. 
Select **Add custom domain**. ++ :::image type="content" source="media/tutorial-federation/custom1.png" alt-text="Screenshot that shows the Add custom domain button highlighted."::: +1. In **Custom domain names**, enter the name of your custom domain, and then select **Add domain**. +1. In **Custom domain name**, either TXT or MX information is shown. You must add this information to the DNS information of the domain registrar under your domain. Go to your domain registrar and enter either the TXT or the MX information in the DNS settings for your domain. ++ :::image type="content" source="media/tutorial-federation/custom2.png" alt-text="Screenshot that shows where you get TXT or MX information."::: + Adding this information to your domain registrar allows Azure to verify your domain. Domain verification might take up to 24 hours. ++ For more information, see the [add a custom domain](../../active-directory/fundamentals/add-custom-domain.md) documentation. +1. To ensure that the domain is verified, select **Verify**. ++ :::image type="content" source="media/tutorial-federation/custom3.png" alt-text="Screenshot that shows a success message after you select Verify."::: ## Download and install Azure AD Connect-Now it's time to download and install Azure AD Connect. Once it has been installed we'll run through the express installation. Do the following: --1. Download [Azure AD Connect](https://www.microsoft.com/download/details.aspx?id=47594) -2. Navigate to and double-click **AzureADConnect.msi**. -3. On the Welcome screen, select the box agreeing to the licensing terms and click **Continue**. -4. On the Express settings screen, click **Customize**. -5. On the Install required components screen. Click **Install**. -6. On the User Sign-in screen, select **Federation with AD FS** and click **Next**. - --1. On the Connect to Azure AD screen, enter the username and password of the Global Administrator we created above and click **Next**. -2. 
On the Connect your directories screen, click **Add Directory**. Then select **Create new AD account** and enter the contoso\Administrator username and password and click **OK**. -3. Click **Next**. -4. On the Azure AD sign-in configuration screen, select **Continue without matching all UPN suffixes to verified domains** and click **Next.** -5. On the Domain and OU filtering screen, click **Next**. -6. On the Uniquely identifying your users screen, click **Next**. -7. On the Filter users and devices screen, click **Next**. -8. On the Optional features screen, click **Next**. -9. On the Domain Administrator credentials page, enter the contoso\Administrator username and password and click **Next.** -10. On the AD FS farm screen, make sure **Configure a new AD FS farm** is selected. -11. Select **Use a certificate installed on the federation servers** and click **Browse**. -12. Enter DC1 in the search box and select it when it's found. Click **Ok**. -13. From the **Certificate File** drop-down, select **adfs.contoso.com** the certificate we created above. Click **Next**. - --1. On the AD FS server screen, click **Browse** and enter DC1 in the search box and select it when it's found. Click **Ok**. Click **Next**. - --1. On the Web application Proxy servers screen, click **Next**. -2. On the AD FS service account screen, enter the contoso\Administrator username and password and click **Next.** -3. On the Azure AD Domain screen, select your verified custom domain from the drop-down and click **Next**. -4. On the Ready to configure screen, click **Install**. -5. When the installation completes, click **Exit**. -6. After the installation has completed, sign out and sign in again before you use the Synchronization Service Manager or Synchronization Rule Editor. ---## Verify users are created and synchronization is occurring -We'll now verify that the users that we had in our on-premises directory have been synchronized and now exist in out Azure AD tenant. 
Be aware that this may take a few hours to complete. To verify users are synchronized do the following. ---1. Browse to the [Azure portal](https://portal.azure.com) and sign in with an account that has an Azure subscription. -2. On the left, select **Azure Active Directory** -3. Under **Manage**, select **Users**. -4. Verify that you see the new users in our tenant - --## Test signing in with one of our users --1. Browse to [https://myapps.microsoft.com](https://myapps.microsoft.com) -2. Sign-in with a user account that was created in our new tenant. You'll need to sign-in using the following format: (user@domain.onmicrosoft.com). Use the same password that the user uses to sign-in on-premises. -  --You have now successfully setup a hybrid identity environment that you can use to test and familiarize yourself with what Azure has to offer. --## Next Steps --- [Hardware and prerequisites](how-to-connect-install-prerequisites.md) -- [Customized settings](how-to-connect-install-custom.md)-- [Azure AD Connect and federation](how-to-connect-fed-whatis.md)++Now it's time to download and install Azure AD Connect. After it's installed, you'll use the express installation. ++1. Download [Azure AD Connect](https://www.microsoft.com/download/details.aspx?id=47594). +1. Go to *AzureADConnect.msi* and double-click to open the installation file. +1. In **Welcome**, select the checkbox to agree to the licensing terms, and then select **Continue**. +1. In **Express settings**, select **Customize**. +1. In **Install required components**, select **Install**. +1. In **User sign-in**, select **Federation with AD FS**, and then select **Next**. ++ :::image type="content" source="media/tutorial-federation/fed1.png" alt-text="Screenshot that shows where to select Federation with AD FS."::: +1. In **Connect to Azure AD**, enter the username and password of the Hybrid Identity Administrator account you created earlier, and then select **Next**. +1. 
In **Connect your directories**, select **Add directory**. Then select **Create new AD account** and enter the contoso\Administrator username and password. Select **OK**.
+1. Select **Next**.
+1. In **Azure AD sign-in configuration**, select **Continue without matching all UPN suffixes to verified domains**. Select **Next**.
+1. In **Domain and OU filtering**, select **Next**.
+1. In **Uniquely identifying your users**, select **Next**.
+1. In **Filter users and devices**, select **Next**.
+1. In **Optional features**, select **Next**.
+1. In **Domain Administrator credentials**, enter the contoso\Administrator username and password, and then select **Next**.
+1. In **AD FS farm**, make sure that **Configure a new AD FS farm** is selected.
+1. Select **Use a certificate installed on the federation servers**, and then select **Browse**.
+1. In the search box, enter **DC1** and select it in the search results. Select **OK**.
+1. For **Certificate File**, select **adfs.contoso.com**, the certificate you created. Select **Next**.
+
+ :::image type="content" source="media/tutorial-federation/fed2.png" alt-text="Screenshot that shows where to select the certificate file you created.":::
+1. In **AD FS server**, select **Browse**. In the search box, enter **DC1** and select it in the search results. Select **OK**, and then select **Next**.
+
+ :::image type="content" source="media/tutorial-federation/fed3.png" alt-text="Screenshot that shows where to select your AD FS server.":::
+1. In **Web application proxy servers**, select **Next**.
+1. In **AD FS service account**, enter the contoso\Administrator username and password, and then select **Next**.
+1. In **Azure AD Domain**, select your verified custom domain, and then select **Next**.
+1. In **Ready to configure**, select **Install**.
+1. When the installation is finished, select **Exit**.
+1. Before you use Synchronization Service Manager or Synchronization Rule Editor, sign out, and then sign in again. 
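Azure AD Connect schedules sync cycles automatically after installation. If you want to inspect the scheduler or trigger a cycle manually, a minimal sketch using the ADSync PowerShell module that Azure AD Connect installs on the server (run it in an elevated session; availability of the module depends on the installed Azure AD Connect build):

```powershell
#Import the ADSync module that Azure AD Connect installs
Import-Module ADSync

#Show the scheduler state: sync interval, next run time, and whether a cycle is in progress
Get-ADSyncScheduler

#Trigger a full sync cycle; use -PolicyType Delta to sync only recent changes
Start-ADSyncSyncCycle -PolicyType Initial
```

This only confirms that a sync cycle ran on the server; it doesn't replace verifying the synced users in the Azure portal.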
+
+## Check for users in the portal
+
+Now you'll verify that the users in your on-premises Active Directory domain have synced and are now in your Azure AD tenant. The initial sync might take a few hours to complete.
+
+To verify that the users are synced:
+
+1. In the [Azure portal](https://portal.azure.com), sign in to the account that's associated with your Azure subscription.
+1. In the portal menu, select **Azure Active Directory**.
+1. In the resource menu under **Manage**, select **Users**.
+1. Verify that the new users appear in your tenant.
+
+ :::image type="content" source="media/tutorial-federation/sync1.png" alt-text="Screenshot that shows verifying that users were synced in Azure Active Directory.":::
+ 
## Sign in with a user account to test sync
+
+To test that users from your Windows Server AD domain are synced with your Azure AD tenant, sign in as one of the users:
+
+1. Go to [https://myapps.microsoft.com](https://myapps.microsoft.com).
+1. Sign in with a user account that was created in your new tenant.
+
+ For the username, use the format `user@domain.onmicrosoft.com`. Use the same password the user uses to sign in to on-premises Active Directory.
+
+You've successfully set up a hybrid identity environment that you can use to test and to get familiar with what Azure has to offer.
+
+## Next steps
+
+- Review [Azure AD Connect hardware and prerequisites](how-to-connect-install-prerequisites.md).
+- Learn how to use [customized settings](how-to-connect-install-custom.md) in Azure AD Connect.
+- Learn more about [Azure AD Connect and federation](how-to-connect-fed-whatis.md). |
active-directory | Tutorial Passthrough Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tutorial-passthrough-authentication.md | Title: 'Tutorial: Integrate a single AD forest to Azure using PTA' -description: Demonstrates how to setup a hybrid identity environment using pass-through authentication. + Title: 'Tutorial: Use pass-through authentication for hybrid identity in a single Active Directory forest' +description: Learn how to set up a hybrid identity environment by using pass-through authentication to integrate a Windows Server Active Directory forest with Azure Active Directory. -# Tutorial: Integrate a single AD forest using pass-through authentication (PTA) +# Tutorial: Use pass-through authentication for hybrid identity in a single Active Directory forest - +This tutorial shows you how to create a hybrid identity environment in Azure by using pass-through authentication and Windows Server Active Directory (Windows Server AD). You can use the hybrid identity environment you create for testing or to get more familiar with how hybrid identity works. -The following tutorial will walk you through creating a hybrid identity environment using pass-through authentication. This environment can then be used for testing or for getting more familiar with how a hybrid identity works. ++In this tutorial, you learn how to: ++> [!div class="checklist"] +> +> - Create a virtual machine. +> - Create a Windows Server Active Directory environment. +> - Create a Windows Server Active Directory user. +> - Create an Azure Active Directory tenant. +> - Create a Hybrid Identity Administrator account in Azure. +> - Add a custom domain to your directory. +> - Set up Azure AD Connect. +> - Test and verify that users are synced. ## Prerequisites-The following are prerequisites required for completing this tutorial -- A computer with [Hyper-V](/windows-server/virtualization/hyper-v/hyper-v-technology-overview) installed. 
It's suggested to do this on either a [Windows 10](/virtualization/hyper-v-on-windows/about/supported-guest-os) or a [Windows Server 2016](/windows-server/virtualization/hyper-v/supported-windows-guest-operating-systems-for-hyper-v-on-windows) computer.-- An [Azure subscription](https://azure.microsoft.com/free)--- A copy of Windows Server 2016-- A [custom domain](../../active-directory/fundamentals/add-custom-domain.md) that can be verified++- A computer with [Hyper-V](/windows-server/virtualization/hyper-v/hyper-v-technology-overview) installed. We suggest that you install Hyper-V on a [Windows 10](/virtualization/hyper-v-on-windows/about/supported-guest-os) or [Windows Server 2016](/windows-server/virtualization/hyper-v/supported-windows-guest-operating-systems-for-hyper-v-on-windows) computer. +- An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. +- An [external network adapter](/virtualization/hyper-v-on-windows/quick-start/connect-to-network), so the virtual machine can connect to the internet. +- A copy of Windows Server 2016. +- A [custom domain](../../active-directory/fundamentals/add-custom-domain.md) that can be verified. > [!NOTE]-> This tutorial uses PowerShell scripts so that you can create the tutorial environment in the quickest amount of time. Each of the scripts uses variables that are declared at the beginning of the scripts. You can and should change the variables to reflect your environment. +> This tutorial uses PowerShell scripts to quickly create the tutorial environment. Each script uses variables that are declared at the beginning of the script. Be sure to change the variables to reflect your environment. >->The scripts used create a general Active Directory environment prior to installing Azure AD Connect. They are relevant for all of the tutorials. 
+> The scripts in the tutorial create a general Windows Server Active Directory (Windows Server AD) environment before they install Azure AD Connect. The scripts are also used in related tutorials. >-> Copies of the PowerShell scripts that are used in this tutorial are available on GitHub [here](https://github.com/billmath/tutorial-phs). +> The PowerShell scripts that are used in this tutorial are available on [GitHub](https://github.com/billmath/tutorial-phs). ## Create a virtual machine-The first thing that we need to do, in order to get our hybrid identity environment up and running is to create a virtual machine that will be used as our on-premises Active Directory server. -->[!NOTE] ->If you have never run a script in PowerShell on your host machine you will need to run `Set-ExecutionPolicy remotesigned` and say yes in PowerShell, prior to running scripts. --Do the following: --1. Open up the PowerShell ISE as Administrator. -2. Run the following script. --```powershell -#Declare variables -$VMName = 'DC1' -$Switch = 'External' -$InstallMedia = 'D:\ISO\en_windows_server_2016_updated_feb_2018_x64_dvd_11636692.iso' -$Path = 'D:\VM' -$VHDPath = 'D:\VM\DC1\DC1.vhdx' -$VHDSize = '64424509440' --#Create New Virtual Machine -New-VM -Name $VMName -MemoryStartupBytes 16GB -BootDevice VHD -Path $Path -NewVHDPath $VHDPath -NewVHDSizeBytes $VHDSize -Generation 2 -Switch $Switch --#Set the memory to be non-dynamic -Set-VMMemory $VMName -DynamicMemoryEnabled $false --#Add DVD Drive to Virtual Machine -Add-VMDvdDrive -VMName $VMName -ControllerNumber 0 -ControllerLocation 1 -Path $InstallMedia --#Mount Installation Media -$DVDDrive = Get-VMDvdDrive -VMName $VMName --#Configure Virtual Machine to Boot from DVD -Set-VMFirmware -VMName $VMName -FirstBootDevice $DVDDrive -``` --## Complete the operating system deployment -In order to finish building the virtual machine, you need to finish the operating system installation. --1. 
Hyper-V Manager, double-click on the virtual machine
-2. Click on the Start button.
-3. You'll be prompted to 'Press any key to boot from CD or DVD'. Go ahead and do so.
-4. On the Windows Server start up screen select your language and click **Next**.
-5. Click **Install Now**.
-6. Enter your license key and click **Next**.
-7. Check **I accept the license terms and click **Next**.
-8. Select **Custom: Install Windows Only (Advanced)**
-9. Click **Next**
-10. Once the installation has completed, restart the virtual machine, sign-in and run Windows updates to ensure the VM is the most up-to-date. Install the latest updates.
-
-## Install Active Directory prerequisites
-Now that we have a virtual machine up, we need to do a few things prior to installing Active Directory. That is, we need to rename the virtual machine, set a static IP address and DNS information, and install the Remote Server Administration tools. Do the following:
-
-1. Open up the PowerShell ISE as Administrator.
-2. Run `Set-ExecutionPolicy remotesigned` and say yes to all [A]. Press Enter.
-3. Run the following script. 
--```powershell -#Declare variables -$ipaddress = "10.0.1.117" -$ipprefix = "24" -$ipgw = "10.0.1.1" -$ipdns = "10.0.1.117" -$ipdns2 = "8.8.8.8" -$ipif = (Get-NetAdapter).ifIndex -$featureLogPath = "c:\poshlog\featurelog.txt" -$newname = "DC1" -$addsTools = "RSAT-AD-Tools" --#Set static IP address -New-NetIPAddress -IPAddress $ipaddress -PrefixLength $ipprefix -InterfaceIndex $ipif -DefaultGateway $ipgw --# Set the DNS servers -Set-DnsClientServerAddress -InterfaceIndex $ipif -ServerAddresses ($ipdns, $ipdns2) --#Rename the computer -Rename-Computer -NewName $newname -force --#Install features -New-Item $featureLogPath -ItemType file -Force -Add-WindowsFeature $addsTools -Get-WindowsFeature | Where installed >>$featureLogPath --#Restart the computer -Restart-Computer -``` ++To create a hybrid identity environment, the first task is to create a virtual machine to use as an on-premises Windows Server AD server. ++> [!NOTE] +> If you've never run a script in PowerShell on your host machine, before you run any scripts, open Windows PowerShell ISE as administrator and run `Set-ExecutionPolicy remotesigned`. In the **Execution Policy Change** dialog, select **Yes**. ++To create the virtual machine: ++1. Open Windows PowerShell ISE as administrator. +1. 
Run the following script: ++ ```powershell + #Declare variables + $VMName = 'DC1' + $Switch = 'External' + $InstallMedia = 'D:\ISO\en_windows_server_2016_updated_feb_2018_x64_dvd_11636692.iso' + $Path = 'D:\VM' + $VHDPath = 'D:\VM\DC1\DC1.vhdx' + $VHDSize = '64424509440' + + #Create a new virtual machine + New-VM -Name $VMName -MemoryStartupBytes 16GB -BootDevice VHD -Path $Path -NewVHDPath $VHDPath -NewVHDSizeBytes $VHDSize -Generation 2 -Switch $Switch + + #Set the memory to be non-dynamic + Set-VMMemory $VMName -DynamicMemoryEnabled $false + + #Add a DVD drive to the virtual machine + Add-VMDvdDrive -VMName $VMName -ControllerNumber 0 -ControllerLocation 1 -Path $InstallMedia + + #Mount installation media + $DVDDrive = Get-VMDvdDrive -VMName $VMName + + #Configure the virtual machine to boot from the DVD + Set-VMFirmware -VMName $VMName -FirstBootDevice $DVDDrive + ``` ++## Install the operating system ++To finish creating the virtual machine, install the operating system: ++1. In Hyper-V Manager, double-click the virtual machine. +1. Select **Start**. +1. At the prompt, press any key to boot from CD or DVD. +1. In the Windows Server start window, select your language, and then select **Next**. +1. Select **Install Now**. +1. Enter your license key and select **Next**. +1. Select the **I accept the license terms** checkbox and select **Next**. +1. Select **Custom: Install Windows Only (Advanced)**. +1. Select **Next**. +1. When the installation is finished, restart the virtual machine. Sign in, and then check Windows Update. Install any updates to ensure that the VM is fully up-to-date. ++## Install Windows Server AD prerequisites ++Before you install Windows Server AD, run a script that installs prerequisites: ++1. Open Windows PowerShell ISE as administrator. +1. Run `Set-ExecutionPolicy remotesigned`. In the **Execution Policy Change** dialog, select **Yes to All**. +1. 
Run the following script: ++ ```powershell + #Declare variables + $ipaddress = "10.0.1.117" + $ipprefix = "24" + $ipgw = "10.0.1.1" + $ipdns = "10.0.1.117" + $ipdns2 = "8.8.8.8" + $ipif = (Get-NetAdapter).ifIndex + $featureLogPath = "c:\poshlog\featurelog.txt" + $newname = "DC1" + $addsTools = "RSAT-AD-Tools" + + #Set a static IP address + New-NetIPAddress -IPAddress $ipaddress -PrefixLength $ipprefix -InterfaceIndex $ipif -DefaultGateway $ipgw + + # Set the DNS servers + Set-DnsClientServerAddress -InterfaceIndex $ipif -ServerAddresses ($ipdns, $ipdns2) + + #Rename the computer + Rename-Computer -NewName $newname -force + + #Install features + New-Item $featureLogPath -ItemType file -Force + Add-WindowsFeature $addsTools + Get-WindowsFeature | Where installed >>$featureLogPath + + #Restart the computer + Restart-Computer + ``` ## Create a Windows Server AD environment-Now that we have the VM created and it has been renamed and has a static IP address, we can go ahead and install and configure Active Directory Domain Services. Do the following: --1. Open up the PowerShell ISE as Administrator. -2. Run the following script. 
--```powershell -#Declare variables -$DatabasePath = "c:\windows\NTDS" -$DomainMode = "WinThreshold" -$DomainName = "contoso.com" -$DomaninNetBIOSName = "CONTOSO" -$ForestMode = "WinThreshold" -$LogPath = "c:\windows\NTDS" -$SysVolPath = "c:\windows\SYSVOL" -$featureLogPath = "c:\poshlog\featurelog.txt" -$Password = "Pass1w0rd" -$SecureString = ConvertTo-SecureString $Password -AsPlainText -Force --#Install AD DS, DNS and GPMC -start-job -Name addFeature -ScriptBlock { -Add-WindowsFeature -Name "ad-domain-services" -IncludeAllSubFeature -IncludeManagementTools -Add-WindowsFeature -Name "dns" -IncludeAllSubFeature -IncludeManagementTools -Add-WindowsFeature -Name "gpmc" -IncludeAllSubFeature -IncludeManagementTools } -Wait-Job -Name addFeature -Get-WindowsFeature | Where installed >>$featureLogPath --#Create New AD Forest -Install-ADDSForest -CreateDnsDelegation:$false -DatabasePath $DatabasePath -DomainMode $DomainMode -DomainName $DomainName -SafeModeAdministratorPassword $SecureString -DomainNetbiosName $DomainNetBIOSName -ForestMode $ForestMode -InstallDns:$true -LogPath $LogPath -NoRebootOnCompletion:$false -SysvolPath $SysVolPath -Force:$true -``` ++Now, install and configure Active Directory Domain Services to create the environment: ++1. Open Windows PowerShell ISE as administrator. +1. 
Run the following script: ++ ```powershell + #Declare variables + $DatabasePath = "c:\windows\NTDS" + $DomainMode = "WinThreshold" + $DomainName = "contoso.com" + $DomainNetBIOSName = "CONTOSO" + $ForestMode = "WinThreshold" + $LogPath = "c:\windows\NTDS" + $SysVolPath = "c:\windows\SYSVOL" + $featureLogPath = "c:\poshlog\featurelog.txt" + $Password = ConvertTo-SecureString "Passw0rd" -AsPlainText -Force + + #Install Active Directory Domain Services, DNS, and Group Policy Management Console + start-job -Name addFeature -ScriptBlock { + Add-WindowsFeature -Name "ad-domain-services" -IncludeAllSubFeature -IncludeManagementTools + Add-WindowsFeature -Name "dns" -IncludeAllSubFeature -IncludeManagementTools + Add-WindowsFeature -Name "gpmc" -IncludeAllSubFeature -IncludeManagementTools } + Wait-Job -Name addFeature + Get-WindowsFeature | Where installed >>$featureLogPath + + #Create a new Windows Server AD forest + Install-ADDSForest -CreateDnsDelegation:$false -DatabasePath $DatabasePath -DomainMode $DomainMode -DomainName $DomainName -SafeModeAdministratorPassword $Password -DomainNetbiosName $DomainNetBIOSName -ForestMode $ForestMode -InstallDns:$true -LogPath $LogPath -NoRebootOnCompletion:$false -SysvolPath $SysVolPath -Force:$true + ``` ## Create a Windows Server AD user-Now that we have our Active Directory environment, we need to a test account. This account will be created in our on-premises AD environment and then synchronized to Azure AD. Do the following: -1. Open up the PowerShell ISE as Administrator. -2. Run the following script. +Next, create a test user account. Create this account in your on-premises Active Directory environment. The account is then synced to Azure Active Directory (Azure AD). ++1. Open Windows PowerShell ISE as administrator. +1. 
Run the following script: ++ ```powershell + #Declare variables + $Givenname = "Allie" + $Surname = "McCray" + $Displayname = "Allie McCray" + $Name = "amccray" + $Password = "Pass1w0rd" + $Identity = "CN=ammccray,CN=Users,DC=contoso,DC=com" + $SecureString = ConvertTo-SecureString $Password -AsPlainText -Force + + #Create the user + New-ADUser -Name $Name -GivenName $Givenname -Surname $Surname -DisplayName $Displayname -AccountPassword $SecureString + + #Set the password to never expire + Set-ADUser -Identity $Identity -PasswordNeverExpires $true -ChangePasswordAtLogon $false -Enabled $true + ``` -```powershell -#Declare variables -$Givenname = "Allie" -$Surname = "McCray" -$Displayname = "Allie McCray" -$Name = "amccray" -$Password = "Pass1w0rd" -$Identity = "CN=ammccray,CN=Users,DC=contoso,DC=com" -$SecureString = ConvertTo-SecureString $Password -AsPlainText -Force +## Create an Azure AD tenant +Now, create an Azure AD tenant, so you can sync your users in Azure: -#Create the user -New-ADUser -Name $Name -GivenName $Givenname -Surname $Surname -DisplayName $Displayname -AccountPassword $SecureString +1. In the [Azure portal](https://portal.azure.com), sign in with the account that's associated with your Azure subscription. +1. Search for and then select **Azure Active Directory**. +1. Select **Create**. -#Set the password to never expire -Set-ADUser -Identity $Identity -PasswordNeverExpires $true -ChangePasswordAtLogon $false -Enabled $true -``` + :::image type="content" source="media/tutorial-federation/create1.png" alt-text="Screenshot that shows how to create an Azure AD tenant."::: +1. Enter a name for the organization and an initial domain name. Then select **Create** to create your directory. +1. To manage the directory, select the **here** link. -## Create an Azure AD tenant -Now we need to create an Azure AD tenant so that we can synchronize our users to the cloud. To create a new Azure AD tenant, do the following. 
+## Create a Hybrid Identity Administrator in Azure AD -1. Browse to the [Azure portal](https://portal.azure.com) and sign in with an account that has an Azure subscription. -2. Select the **plus icon (+)** and search for **Azure Active Directory**. -3. Select **Azure Active Directory** in the search results. -4. Select **Create**.</br> -</br> -5. Provide a **name for the organization** along with the **initial domain name**. Then select **Create**. This will create your directory. -6. Once this has completed, click the **here** link, to manage the directory. +The next task is to create a Hybrid Identity Administrator account. This account is used to create the Azure AD Connector account during Azure AD Connect installation. The Azure AD Connector account is used to write information to Azure AD. -## Create a Hybrid Identity Administrator in Azure AD -Now that we have an Azure AD tenant, we'll create a Hybrid Identity Administratoristrator account. This account is used to create the Azure AD Connector account during Azure AD Connect installation. The Azure AD Connector account is used to write information to Azure AD. To create the Hybrid Identity Administrator account do the following. --1. Under **Manage**, select **Users**.</br> -</br> -2. Select **All users** and then select **+ New user**. -3. Provide a name and username for this user. This will be your Global Administrator for the tenant. You'll also want to change the **Directory role** to **Hybrid Identity Administrator.** You can also show the temporary password. When you're done, select **Create**.</br> -</br> -4. Once this has completed, open a new web browser and sign-in to myapps.microsoft.com using the new Hybrid Identity Administrator account and the temporary password. -5. Change the password for the Hybrid Identity Administrator to something that you will remember. 
--## Add the custom domain name to your directory -Now that we have a tenant and a Hybrid Identity Administrator, we need to add our custom domain so that Azure can verify it. Do the following: --1. Back in the [Azure portal](https://aad.portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview) be sure to close the **All Users** blade. -2. On the left, select **Custom domain names**. -3. Select **Add custom domain**.</br> -</br> -4. On **Custom domain names**, enter the name of your custom domain in the box, and click **Add Domain**. -5. On the custom domain name screen you will be supplied with either TXT or MX information. This information must be added to the DNS information of the domain registrar under your domain. So you need to go to your domain registrar, enter either the TXT or MX information in the DNS settings for your domain. This will allow Azure to verify your domain. This may take up to 24 hours for Azure to verify it. For more information, see the [add a custom domain](../../active-directory/fundamentals/add-custom-domain.md) documentation.</br> -</br> -6. To ensure that it's verified, click the Verify button.</br> -</br> --## Download and install Azure AD Connect -Now it's time to download and install Azure AD Connect. Once it has been installed we'll run through the express installation. Do the following: --1. Download [Azure AD Connect](https://www.microsoft.com/download/details.aspx?id=47594) -2. Navigate to and double-click **AzureADConnect.msi**. -3. On the Welcome screen, select the box agreeing to the licensing terms and click **Continue**. -4. On the Express settings screen, click **Customize**. -5. On the Install required components screen. Click **Install**. -6. On the User Sign-in screen, select **Pass-through authentication** and **Enable single sign-on** and click **Next**.</br> -</b> -7. 
On the Connect to Azure AD screen, enter the username and password of the Global Administrator we created above and click **Next**. -2. On the Connect your directories screen, click **Add Directory**. Then select **Create new AD account** and enter the contoso\Administrator username and password and click **OK**. -3. Click **Next**. -4. On the Azure AD sign-in configuration screen, select **Continue without matching all UPN suffixes to verified domains** and click **Next.** -5. On the Domain and OU filtering screen, click **Next**. -6. On the Uniquely identifying your users screen, click **Next**. -7. On the Filter users and devices screen, click **Next**. -8. On the Optional features screen, click **Next**. -9. On the Enable single sign-n credentials page, enter the contoso\Administrator username and password and click **Next.** -10. On the Ready to configure screen, click **Install**. -11. When the installation completes, click **Exit**. -12. After the installation has completed, sign out and sign in again before you use the Synchronization Service Manager or Synchronization Rule Editor. ---## Verify users are created and synchronization is occurring -We will now verify that the users that we had in our on-premises directory have been synchronized and now exist in out Azure AD tenant. Be aware that this may take a few hours to complete. To verify users are synchronized do the following. ---1. Browse to the [Azure portal](https://portal.azure.com) and sign in with an account that has an Azure subscription. -2. On the left, select **Azure Active Directory** -3. Under **Manage**, select **Users**. -4. Verify that you see the new users in our tenant - --## Test signing in with one of our users --1. Browse to [https://myapps.microsoft.com](https://myapps.microsoft.com) -2. Sign-in with a user account that was created in our new tenant. You'll need to sign-in using the following format: (user@domain.onmicrosoft.com). 
Use the same password that the user uses to sign-in on-premises. -  --You have now successfully setup a hybrid identity environment that you can use to test and familiarize yourself with what Azure has to offer. --## Next Steps ---- [Hardware and prerequisites](how-to-connect-install-prerequisites.md) -- [Customized settings](how-to-connect-install-custom.md)-- [Pass-through authentication](how-to-connect-pta.md)+To create the Hybrid Identity Administrator account: ++1. In the left menu under **Manage**, select **Users**. ++ :::image type="content" source="media/tutorial-passthrough-authentication/gadmin1.png" alt-text="Screenshot that shows Users selected under Manage in the resource menu to create a Hybrid Identity Administrator in Azure AD."::: +1. Select **All users**, and then select **New user**. ++1. In the **User** pane, enter a name and a username for the new user. You're creating your Hybrid Identity Administrator account for the tenant. You can show and copy the temporary password. ++ In the **Directory role** pane, select **Hybrid Identity Administrator**. Then select **Create**. ++ :::image type="content" source="media/tutorial-passthrough-authentication/gadmin2.png" alt-text="Screenshot that shows the Create button you select when you create a Hybrid Identity Administrator account in Azure AD."::: +1. In a new web browser window, sign in to `myapps.microsoft.com` by using the new Hybrid Identity Administrator account and the temporary password. ++1. Choose a new password for the Hybrid Identity Administrator account and change the password. ++## Add a custom domain name to your directory ++Now that you have a tenant and a Hybrid Identity Administrator account, add your custom domain so that Azure can verify it. ++To add a custom domain name to a directory: ++1. In the [Azure portal](https://aad.portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview), be sure to close the **All users** pane. +1. 
In the left menu under **Manage**, select **Custom domain names**. +1. Select **Add custom domain**. ++ :::image type="content" source="media/tutorial-passthrough-authentication/custom1.png" alt-text="Screenshot that shows the Add custom domain button highlighted."::: +1. In **Custom domain names**, enter the name of your custom domain, and then select **Add domain**. +1. In **Custom domain name**, either TXT or MX information is shown. You must add this information to the DNS information of the domain registrar under your domain. Go to your domain registrar and enter either the TXT or the MX information in the DNS settings for your domain. ++ :::image type="content" source="media/tutorial-passthrough-authentication/custom2.png" alt-text="Screenshot that shows where you get TXT or MX information."::: + Adding this information to your domain registrar allows Azure to verify your domain. Domain verification might take up to 24 hours. ++ For more information, see the [add a custom domain](../../active-directory/fundamentals/add-custom-domain.md) documentation. +1. To ensure that the domain is verified, select **Verify**. ++ :::image type="content" source="media/tutorial-passthrough-authentication/custom3.png" alt-text="Screenshot that shows a success message after you select Verify."::: ++## Download and install Azure AD Connect ++Now it's time to download and install Azure AD Connect. After it's installed, you'll use the express installation. ++1. Download [Azure AD Connect](https://www.microsoft.com/download/details.aspx?id=47594). +1. Go to *AzureADConnect.msi* and double-click to open the installation file. +1. In **Welcome**, select the checkbox to agree to the licensing terms, and then select **Continue**. +1. In **Express settings**, select **Customize**. +1. In **Install required components**, select **Install**. +1. In **User sign-in**, select **Pass-through authentication** and **Enable single sign-on**, and then select **Next**. 
++ :::image type="content" source="media/tutorial-passthrough-authentication/pta1.png" alt-text="Screenshot that shows where to select Pass-through authentication."::: +1. In **Connect to Azure AD**, enter the username and password of the Hybrid Identity Administrator account you created earlier, and then select **Next**. +1. In **Connect your directories**, select **Add directory**. Then select **Create new AD account** and enter the contoso\Administrator username and password. Select **OK**. +1. Select **Next**. +1. In **Azure AD sign-in configuration**, select **Continue without matching all UPN suffixes to verified domains**. Select **Next**. +1. In **Domain and OU filtering**, select **Next**. +1. In **Uniquely identifying your users**, select **Next**. +1. In **Filter users and devices**, select **Next**. +1. In **Optional features**, select **Next**. +1. In **Enable single sign-on credentials**, enter the contoso\Administrator username and password, and then select **Next**. +1. In **Ready to configure**, select **Install**. +1. When the installation is finished, select **Exit**. +1. Before you use Synchronization Service Manager or Synchronization Rule Editor, sign out, and then sign in again. ++## Check for users in the portal ++Now you'll verify that the users in your on-premises Active Directory have synced and are now in your Azure AD tenant. This section might take a few hours to complete. ++To verify that the users are synced: ++1. In the [Azure portal](https://portal.azure.com), sign in with the account that's associated with your Azure subscription. +1. In the portal menu, select **Azure Active Directory**. +1. In the resource menu under **Manage**, select **Users**. +1. Verify that the new users appear in your tenant. 
++ :::image type="content" source="media/tutorial-passthrough-authentication/sync1.png" alt-text="Screenshot that shows verifying that users were synced in Azure Active Directory."::: + +## Sign in with a user account to test sync ++To test that users from your Windows Server AD environment are synced with your Azure AD tenant, sign in as one of the users: ++1. Go to [https://myapps.microsoft.com](https://myapps.microsoft.com). +1. Sign in with a user account that was created in your new tenant. ++ For the username, use the format `user@domain.onmicrosoft.com`. Use the same password the user uses to sign in to on-premises Active Directory. ++You've successfully set up a hybrid identity environment that you can use to test and to get familiar with what Azure has to offer. ++## Next steps ++- Review [Azure AD Connect hardware and prerequisites](how-to-connect-install-prerequisites.md). +- Learn how to use [customized settings](how-to-connect-install-custom.md) in Azure AD Connect. +- Learn more about [pass-through authentication](how-to-connect-pta.md) with Azure AD Connect. |
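
The custom-domain verification step in the tutorial above depends on DNS propagation, which can take up to 24 hours. Before selecting **Verify** in the portal, you can check from any machine whether the TXT record is already visible in public DNS; a minimal PowerShell sketch (the domain name and the `MS=...` value are placeholders — use the value shown on your **Custom domain names** page):

```powershell
# Check that the verification TXT record has propagated.
# Replace the domain and the MS=... value with your own (placeholders here).
$domain   = 'contoso.com'
$expected = 'MS=ms12345678'

# Resolve-DnsName is part of the built-in DnsClient module.
$records = Resolve-DnsName -Name $domain -Type TXT -ErrorAction Stop |
    Select-Object -ExpandProperty Strings

if ($records -contains $expected) {
    Write-Output 'TXT record found; the Verify step should succeed.'
} else {
    Write-Output 'TXT record not visible yet; propagation can take up to 24 hours.'
}
```

If the record isn't visible yet, there's no point selecting **Verify** in the portal; wait and rerun the check.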
active-directory | Tutorial Password Hash Sync | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tutorial-password-hash-sync.md | Title: 'Tutorial: Integrate a single AD forest to Azure using PHS' -description: Demonstrates how to setup a hybrid identity environment using password hash sync. + Title: 'Tutorial: Use password hash sync for hybrid identity in a single Active Directory forest' +description: Learn how to set up a hybrid identity environment by using password hash sync to integrate a Windows Server Active Directory forest with Azure Active Directory. -# Tutorial: Integrate a single AD forest using password hash sync (PHS) +# Tutorial: Use password hash sync for hybrid identity in a single Active Directory forest - +This tutorial shows you how to create a hybrid identity environment in Azure by using password hash sync and Windows Server Active Directory (Windows Server AD). You can use the hybrid identity environment you create for testing or to get more familiar with how hybrid identity works. -The following tutorial will walk you through creating a hybrid identity environment using password hash sync. This environment can then be used for testing or for getting more familiar with how a hybrid identity works. ++In this tutorial, you learn how to: ++> [!div class="checklist"] +> +> - Create a virtual machine. +> - Create a Windows Server Active Directory environment. +> - Create a Windows Server Active Directory user. +> - Create an Azure Active Directory tenant. +> - Create a Hybrid Identity Administrator account in Azure. +> - Set up Azure AD Connect. +> - Test and verify that users are synced. ## Prerequisites-The following are prerequisites required for completing this tutorial -- A computer with [Hyper-V](/windows-server/virtualization/hyper-v/hyper-v-technology-overview) installed. 
It's suggested to do this on either a [Windows 10](/virtualization/hyper-v-on-windows/about/supported-guest-os) or a [Windows Server 2016](/windows-server/virtualization/hyper-v/supported-windows-guest-operating-systems-for-hyper-v-on-windows) computer.-- An [external network adapter](/virtualization/hyper-v-on-windows/quick-start/connect-to-network) to allow the virtual machine to communicate with the internet.-- An [Azure subscription](https://azure.microsoft.com/free)-- A copy of Windows Server 2016++- A computer with [Hyper-V](/windows-server/virtualization/hyper-v/hyper-v-technology-overview) installed. We suggest that you install Hyper-V on a [Windows 10](/virtualization/hyper-v-on-windows/about/supported-guest-os) or [Windows Server 2016](/windows-server/virtualization/hyper-v/supported-windows-guest-operating-systems-for-hyper-v-on-windows) computer. +- An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. +- An [external network adapter](/virtualization/hyper-v-on-windows/quick-start/connect-to-network), so the virtual machine can connect to the internet. +- A copy of Windows Server 2016. > [!NOTE]-> This tutorial uses PowerShell scripts so that you can create the tutorial environment in the quickest amount of time. Each of the scripts uses variables that are declared at the beginning of the scripts. You can and should change the variables to reflect your environment. +> This tutorial uses PowerShell scripts to quickly create the tutorial environment. Each script uses variables that are declared at the beginning of the script. Be sure to change the variables to reflect your environment. >->The scripts used create a general Active Directory environment prior to installing Azure AD Connect. They are relevant for all of the tutorials. 
+> The scripts in the tutorial create a general Windows Server Active Directory (Windows Server AD) environment before they install Azure AD Connect. The scripts are also used in related tutorials. >-> Copies of the PowerShell scripts that are used in this tutorial are available on GitHub [here](https://github.com/billmath/tutorial-phs). +> The PowerShell scripts that are used in this tutorial are available on [GitHub](https://github.com/billmath/tutorial-phs). ## Create a virtual machine-The first thing that we need to do, in order to get our hybrid identity environment up and running is to create a virtual machine that will be used as our on-premises Active Directory server. Do the following: --1. Open up the PowerShell ISE as Administrator. -2. Run the following script. --```powershell -#Declare variables -$VMName = 'DC1' -$Switch = 'External' -$InstallMedia = 'D:\ISO\en_windows_server_2016_updated_feb_2018_x64_dvd_11636692.iso' -$Path = 'D:\VM' -$VHDPath = 'D:\VM\DC1\DC1.vhdx' -$VHDSize = '64424509440' --#Create New Virtual Machine -New-VM -Name $VMName -MemoryStartupBytes 16GB -BootDevice VHD -Path $Path -NewVHDPath $VHDPath -NewVHDSizeBytes $VHDSize -Generation 2 -Switch $Switch --#Set the memory to be non-dynamic -Set-VMMemory $VMName -DynamicMemoryEnabled $false --#Add DVD Drive to Virtual Machine -Add-VMDvdDrive -VMName $VMName -ControllerNumber 0 -ControllerLocation 1 -Path $InstallMedia --#Mount Installation Media -$DVDDrive = Get-VMDvdDrive -VMName $VMName --#Configure Virtual Machine to Boot from DVD -Set-VMFirmware -VMName $VMName -FirstBootDevice $DVDDrive -``` --## Complete the operating system deployment -In order to finish building the virtual machine, you need to finish the operating system installation. --1. Hyper-V Manager, double-click on the virtual machine -2. Click on the Start button. -3. You'll be prompted to 'Press any key to boot from CD or DVD'. Go ahead and do so. -4. 
On the Windows Server start up screen select your language and click **Next**. -5. Click **Install Now**. -6. Enter your license key and click **Next**. -7. Check **I accept the license terms and click **Next**. -8. Select **Custom: Install Windows Only (Advanced)** -9. Click **Next** -10. Once the installation has completed, restart the virtual machine, sign-in and run Windows updates to ensure the VM is the most up-to-date. Install the latest updates. --## Install Active Directory prerequisites -Now that we have a virtual machine up, we need to do a few things prior to installing Active Directory. That is, we need to rename the virtual machine, set a static IP address and DNS information, and install the Remote Server Administration tools. Do the following: --1. Open up the PowerShell ISE as Administrator. -2. Run the following script. --```powershell -#Declare variables -$ipaddress = "10.0.1.117" -$ipprefix = "24" -$ipgw = "10.0.1.1" -$ipdns = "10.0.1.117" -$ipdns2 = "8.8.8.8" -$ipif = (Get-NetAdapter).ifIndex -$featureLogPath = "c:\poshlog\featurelog.txt" -$newname = "DC1" -$addsTools = "RSAT-AD-Tools" --#Set static IP address -New-NetIPAddress -IPAddress $ipaddress -PrefixLength $ipprefix -InterfaceIndex $ipif -DefaultGateway $ipgw --# Set the DNS servers -Set-DnsClientServerAddress -InterfaceIndex $ipif -ServerAddresses ($ipdns, $ipdns2) --#Rename the computer -Rename-Computer -NewName $newname -force --#Install features -New-Item $featureLogPath -ItemType file -Force -Add-WindowsFeature $addsTools -Get-WindowsFeature | Where installed >>$featureLogPath --#Restart the computer -Restart-Computer -``` ++To create a hybrid identity environment, the first task is to create a virtual machine to use as an on-premises Windows Server AD server. ++> [!NOTE] +> If you've never run a script in PowerShell on your host machine, before you run any scripts, open Windows PowerShell ISE as administrator and run `Set-ExecutionPolicy remotesigned`. 
In the **Execution Policy Change** dialog, select **Yes**. ++To create the virtual machine: ++1. Open Windows PowerShell ISE as administrator. +1. Run the following script: ++ ```powershell + #Declare variables + $VMName = 'DC1' + $Switch = 'External' + $InstallMedia = 'D:\ISO\en_windows_server_2016_updated_feb_2018_x64_dvd_11636692.iso' + $Path = 'D:\VM' + $VHDPath = 'D:\VM\DC1\DC1.vhdx' + $VHDSize = '64424509440' + + #Create a new virtual machine + New-VM -Name $VMName -MemoryStartupBytes 16GB -BootDevice VHD -Path $Path -NewVHDPath $VHDPath -NewVHDSizeBytes $VHDSize -Generation 2 -Switch $Switch + + #Set the memory to be non-dynamic + Set-VMMemory $VMName -DynamicMemoryEnabled $false + + #Add a DVD drive to the virtual machine + Add-VMDvdDrive -VMName $VMName -ControllerNumber 0 -ControllerLocation 1 -Path $InstallMedia + + #Mount installation media + $DVDDrive = Get-VMDvdDrive -VMName $VMName + + #Configure the virtual machine to boot from the DVD + Set-VMFirmware -VMName $VMName -FirstBootDevice $DVDDrive + ``` ++## Install the operating system ++To finish creating the virtual machine, install the operating system: ++1. In Hyper-V Manager, double-click the virtual machine. +1. Select **Start**. +1. At the prompt, press any key to boot from CD or DVD. +1. In the Windows Server start window, select your language, and then select **Next**. +1. Select **Install Now**. +1. Enter your license key and select **Next**. +1. Select the **I accept the license terms** checkbox and select **Next**. +1. Select **Custom: Install Windows Only (Advanced)**. +1. Select **Next**. +1. When the installation is finished, restart the virtual machine. Sign in, and then check Windows Update. Install any updates to ensure that the VM is fully up-to-date. ++## Install Windows Server AD prerequisites ++Before you install Windows Server AD, run a script that installs prerequisites: ++1. Open Windows PowerShell ISE as administrator. +1. Run `Set-ExecutionPolicy remotesigned`. 
In the **Execution Policy Change** dialog, select **Yes to All**. +1. Run the following script: ++ ```powershell + #Declare variables + $ipaddress = "10.0.1.117" + $ipprefix = "24" + $ipgw = "10.0.1.1" + $ipdns = "10.0.1.117" + $ipdns2 = "8.8.8.8" + $ipif = (Get-NetAdapter).ifIndex + $featureLogPath = "c:\poshlog\featurelog.txt" + $newname = "DC1" + $addsTools = "RSAT-AD-Tools" + + #Set a static IP address + New-NetIPAddress -IPAddress $ipaddress -PrefixLength $ipprefix -InterfaceIndex $ipif -DefaultGateway $ipgw + + # Set the DNS servers + Set-DnsClientServerAddress -InterfaceIndex $ipif -ServerAddresses ($ipdns, $ipdns2) + + #Rename the computer + Rename-Computer -NewName $newname -force + + #Install features + New-Item $featureLogPath -ItemType file -Force + Add-WindowsFeature $addsTools + Get-WindowsFeature | Where installed >>$featureLogPath + + #Restart the computer + Restart-Computer + ``` ## Create a Windows Server AD environment-Now that we have the VM created and it has been renamed and has a static IP address, we can go ahead and install and configure Active Directory Domain Services. Do the following: --1. Open up the PowerShell ISE as Administrator. -2. Run the following script. 
--```powershell -#Declare variables -$DatabasePath = "c:\windows\NTDS" -$DomainMode = "WinThreshold" -$DomainName = "contoso.com" -$DomaninNetBIOSName = "CONTOSO" -$ForestMode = "WinThreshold" -$LogPath = "c:\windows\NTDS" -$SysVolPath = "c:\windows\SYSVOL" -$featureLogPath = "c:\poshlog\featurelog.txt" -$Password = "Pass1w0rd" -$SecureString = ConvertTo-SecureString $Password -AsPlainText -Force --#Install AD DS, DNS and GPMC -start-job -Name addFeature -ScriptBlock { -Add-WindowsFeature -Name "ad-domain-services" -IncludeAllSubFeature -IncludeManagementTools -Add-WindowsFeature -Name "dns" -IncludeAllSubFeature -IncludeManagementTools -Add-WindowsFeature -Name "gpmc" -IncludeAllSubFeature -IncludeManagementTools } -Wait-Job -Name addFeature -Get-WindowsFeature | Where installed >>$featureLogPath --#Create New AD Forest -Install-ADDSForest -CreateDnsDelegation:$false -DatabasePath $DatabasePath -DomainMode $DomainMode -DomainName $DomainName -SafeModeAdministratorPassword $SecureString -DomainNetbiosName $DomainNetBIOSName -ForestMode $ForestMode -InstallDns:$true -LogPath $LogPath -NoRebootOnCompletion:$false -SysvolPath $SysVolPath -Force:$true -``` ++Now, install and configure Active Directory Domain Services to create the environment: ++1. Open Windows PowerShell ISE as administrator. +1. 
Run the following script: ++ ```powershell + #Declare variables + $DatabasePath = "c:\windows\NTDS" + $DomainMode = "WinThreshold" + $DomainName = "contoso.com" + $DomainNetBIOSName = "CONTOSO" + $ForestMode = "WinThreshold" + $LogPath = "c:\windows\NTDS" + $SysVolPath = "c:\windows\SYSVOL" + $featureLogPath = "c:\poshlog\featurelog.txt" + $Password = "Pass1w0rd" + $SecureString = ConvertTo-SecureString $Password -AsPlainText -Force + + #Install Active Directory Domain Services, DNS, and Group Policy Management Console + start-job -Name addFeature -ScriptBlock { + Add-WindowsFeature -Name "ad-domain-services" -IncludeAllSubFeature -IncludeManagementTools + Add-WindowsFeature -Name "dns" -IncludeAllSubFeature -IncludeManagementTools + Add-WindowsFeature -Name "gpmc" -IncludeAllSubFeature -IncludeManagementTools } + Wait-Job -Name addFeature + Get-WindowsFeature | Where installed >>$featureLogPath + + #Create a new Windows Server AD forest + Install-ADDSForest -CreateDnsDelegation:$false -DatabasePath $DatabasePath -DomainMode $DomainMode -DomainName $DomainName -SafeModeAdministratorPassword $SecureString -DomainNetbiosName $DomainNetBIOSName -ForestMode $ForestMode -InstallDns:$true -LogPath $LogPath -NoRebootOnCompletion:$false -SysvolPath $SysVolPath -Force:$true + ``` ## Create a Windows Server AD user-Now that we have our Active Directory environment, we need to a test account. This account will be created in our on-premises AD environment and then synchronized to Azure AD. Do the following: -1. Open up the PowerShell ISE as Administrator. -2. Run the following script. +Next, create a test user account. Create this account in your on-premises Active Directory environment. The account is then synced to Azure Active Directory (Azure AD). ++1. Open Windows PowerShell ISE as administrator. +1. 
Run the following script: ++ ```powershell + #Declare variables + $Givenname = "Allie" + $Surname = "McCray" + $Displayname = "Allie McCray" + $Name = "amccray" + $Password = "Pass1w0rd" + $Identity = "CN=amccray,CN=Users,DC=contoso,DC=com" + $SecureString = ConvertTo-SecureString $Password -AsPlainText -Force + + #Create the user + New-ADUser -Name $Name -GivenName $Givenname -Surname $Surname -DisplayName $Displayname -AccountPassword $SecureString + + #Set the password to never expire + Set-ADUser -Identity $Identity -PasswordNeverExpires $true -ChangePasswordAtLogon $false -Enabled $true + ``` ++## Create an Azure AD tenant -```powershell -#Declare variables -$Givenname = "Allie" -$Surname = "McCray" -$Displayname = "Allie McCray" -$Name = "amccray" -$Password = "Pass1w0rd" -$Identity = "CN=ammccray,CN=Users,DC=contoso,DC=com" -$SecureString = ConvertTo-SecureString $Password -AsPlainText -Force +Now, create an Azure AD tenant, so you can sync your users in Azure: +1. In the [Azure portal](https://portal.azure.com), sign in with the account that's associated with your Azure subscription. +1. Search for and then select **Azure Active Directory**. +1. Select **Create**. -#Create the user -New-ADUser -Name $Name -GivenName $Givenname -Surname $Surname -DisplayName $Displayname -AccountPassword $SecureString + :::image type="content" source="media/tutorial-password-hash-sync/create1.png" alt-text="Screenshot that shows how to create an Azure AD tenant."::: +1. Enter a name for the organization and an initial domain name. Then select **Create** to create your directory. +1. To manage the directory, select the **here** link. -#Set the password to never expire -Set-ADUser -Identity $Identity -PasswordNeverExpires $true -ChangePasswordAtLogon $false -Enabled $true -``` +## Create a Hybrid Identity Administrator in Azure AD -## Create an Azure AD tenant -Now we need to create an Azure AD tenant so that we can synchronize our users to the cloud. 
To create a new Azure AD tenant, do the following. +The next task is to create a Hybrid Identity Administrator account. This account is used to create the Azure AD Connector account during Azure AD Connect installation. The Azure AD Connector account is used to write information to Azure AD. -1. Browse to the [Azure portal](https://portal.azure.com) and sign in with an account that has an Azure subscription. -2. Select the **plus icon (+)** and search for **Azure Active Directory**. -3. Select **Azure Active Directory** in the search results. -4. Select **Create**.</br> -</br> -5. Provide a **name for the organization** along with the **initial domain name**. Then select **Create**. This will create your directory. -6. Once this has completed, click the **here** link, to manage the directory. +To create the Hybrid Identity Administrator account: -## Create a Hybrid Identity Administrator in Azure AD -Now that we have an Azure AD tenant, we'll create a Hybrid Identity Administratoristrator account. This account is used to create the Azure AD Connector account during Azure AD Connect installation. The Azure AD Connector account is used to write information to Azure AD. To create the Hybrid Identity Administrator account do the following. +1. In the left menu under **Manage**, select **Users**. ++ :::image type="content" source="media/tutorial-password-hash-sync/gadmin1.png" alt-text="Screenshot that shows Users selected under Manage in the resource menu to create a Hybrid Identity Administrator in Azure AD."::: +1. Select **All users**, and then select **New user**. +1. In the **User** pane, enter a name and a username for the new user. You're creating your Hybrid Identity Administrator account for the tenant. You can show and copy the temporary password. -1. Under **Manage**, select **Users**.</br> -</br> -2. Select **All users** and then select **+ New user**. -3. Provide a name and username for this user. 
This will be your Hybrid Identity Administrator for the tenant. You'll also want to change the **Directory role** to **Hybrid Identity Administrator.** You can also show the temporary password. When you are done, select **Create**.</br> -</br> -4. Once this has completed, open a new web browser and sign-in to myapps.microsoft.com using the new Hybrid Identity Administrator account and the temporary password. -5. Change the password for the Hybrid Identity Administrator to something that you'll remember. + In the **Directory role** pane, select **Hybrid Identity Administrator**. Then select **Create**. ++ :::image type="content" source="media/tutorial-password-hash-sync/gadmin2.png" alt-text="Screenshot that shows the Create button you select when you create a Hybrid Identity Administrator account in Azure AD."::: +1. In a new web browser window, sign in to `myapps.microsoft.com` by using the new Hybrid Identity Administrator account and the temporary password. +1. Choose a new password for the Hybrid Identity Administrator account and change the password. ## Download and install Azure AD Connect-Now it's time to download and install Azure AD Connect. Once it has been installed we'll run through the express installation. Do the following: -1. Download [Azure AD Connect](https://www.microsoft.com/download/details.aspx?id=47594) -2. Navigate to and double-click **AzureADConnect.msi**. -3. On the Welcome screen, select the box agreeing to the licensing terms and click **Continue**. -4. On the Express settings screen, click **Use express settings**.</br> -</br> -5. On the Connect to Azure AD screen, enter the username and password the Hybrid Identity Administrator for Azure AD. Click **Next**. -6. On the Connect to AD DS screen, enter the username and password for an enterprise admin account. Click **Next**. -7. On the Ready to configure screen, click **Install**. -8. When the installation completes, click **Exit**. -9. 
After the installation has completed, sign out and sign in again before you use the Synchronization Service Manager or Synchronization Rule Editor. +Now it's time to download and install Azure AD Connect. After it's installed, you'll use the express installation. ++1. Download [Azure AD Connect](https://www.microsoft.com/download/details.aspx?id=47594). +1. Go to *AzureADConnect.msi* and double-click to open the installation file. +1. In **Welcome**, select the checkbox to agree to the licensing terms and select **Continue**. +1. In **Express settings**, select **Use express settings**. ++ :::image type="content" source="media/tutorial-password-hash-sync/express1.png" alt-text="Screenshot that shows the Express settings screen and the Use express settings button."::: +1. In **Connect to Azure AD**, enter the username and password for the Hybrid Identity Administrator account for Azure AD. Select **Next**. +1. In **Connect to AD DS**, enter the username and password for an enterprise admin account. Select **Next**. +1. In **Ready to configure**, select **Install**. +1. When the installation is finished, select **Exit**. +1. Before you use Synchronization Service Manager or Synchronization Rule Editor, sign out, and then sign in again. ++## Check for users in the portal +Now you'll verify that the users in your on-premises Active Directory have synced and are now in your Azure AD tenant. This section might take a few hours to complete. -## Verify users are created and synchronization is occurring -We'll now verify that the users that we had in our on-premises directory have been synchronized and now exist in out Azure AD tenant. Be aware that this may take a few hours to complete. To verify users are synchronized do the following. +To verify that the users are synced: +1. In the [Azure portal](https://portal.azure.com), sign in with the account that's associated with your Azure subscription. +1. In the portal menu, select **Azure Active Directory**. +1. 
In the resource menu under **Manage**, select **Users**. +1. Verify that the new users appear in your tenant. -1. Browse to the [Azure portal](https://portal.azure.com) and sign in with an account that has an Azure subscription. -2. On the left, select **Azure Active Directory** -3. Under **Manage**, select **Users**. -4. Verify that you see the new users in our tenant</br> -</br> + :::image type="content" source="media/tutorial-password-hash-sync/sync1.png" alt-text="Screenshot that shows verifying that users were synced in Azure Active Directory."::: + +## Sign in with a user account to test sync -## Test signing in with one of our users +To test that users from your Windows Server AD environment are synced with your Azure AD tenant, sign in as one of the users: -1. Browse to [https://myapps.microsoft.com](https://myapps.microsoft.com) -2. Sign-in with a user account that was created in our new tenant. You'll need to sign-in using the following format: (user@domain.onmicrosoft.com). Use the same password that the user uses to sign-in on-premises.</br> - </br> +1. Go to [https://myapps.microsoft.com](https://myapps.microsoft.com). +1. Sign in with a user account that was created in your new tenant. + For the username, use the format `user@domain.onmicrosoft.com`. Use the same password the user uses to sign in to on-premises Active Directory. -You have now successfully setup a hybrid identity environment that you can use to test and familiarize yourself with what Azure has to offer. -## Next Steps +## Next steps -- [Hardware and prerequisites](how-to-connect-install-prerequisites.md) -- [Express settings](how-to-connect-install-express.md)-- [Password hash synchronization](how-to-connect-password-hash-synchronization.md)|+- Review [Azure AD Connect hardware and prerequisites](how-to-connect-install-prerequisites.md). 
+- Learn how to use [Express settings](how-to-connect-install-express.md) in Azure AD Connect. +- Learn more about [password hash sync](how-to-connect-password-hash-synchronization.md) with Azure AD Connect. |
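The sign-in format called out in the tutorial above (`user@domain.onmicrosoft.com`) can be illustrated with a short sketch. The `BSimon` and `contoso` values below are hypothetical placeholders, not names from the tutorial:

```python
def cloud_upn(user: str, tenant_name: str) -> str:
    """Build the cloud sign-in name for a synced user: user@tenant.onmicrosoft.com."""
    return f"{user}@{tenant_name}.onmicrosoft.com"

# Hypothetical example values; the password stays the same one the user
# uses to sign in to on-premises Active Directory.
print(cloud_upn("BSimon", "contoso"))  # BSimon@contoso.onmicrosoft.com
```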
active-directory | Tutorial Phs Backup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tutorial-phs-backup.md | Title: 'Tutorial: Setting up PHS as backup for AD FS in Azure AD Connect | Microsoft Docs' -description: Demonstrates how to turn on password hash sync as a backup and for AD FS. + Title: 'Tutorial: Set up password hash sync as backup for AD FS in Azure AD Connect' +description: Learn how to turn on password hash sync as a backup for Active Directory Federation Services (AD FS) in Azure AD Connect. -# Tutorial: Setting up PHS as backup for AD FS in Azure AD Connect +# Tutorial: Set up password hash sync as backup for Active Directory Federation Services -The following tutorial will walk you through setting up password hash sync as a backup and fail-over for AD FS. This document will also demonstrate how to enable password hash sync as the primary authentication method, if AD FS has failed or become unavailable. +This tutorial walks you through the steps to set up password hash sync as a backup and failover for Active Directory Federation Services (AD FS) in Azure AD Connect. The tutorial also demonstrates how to set password hash sync as the primary authentication method if AD FS fails or becomes unavailable. ->[!NOTE] ->Although these steps are usually performed during emergency or outage situations, it is recommended that you test these steps and verify your procedures before an outage occurs. -->[!NOTE] ->In the event that you do not have access to Azure AD Connect server or the server does not have access to the internet, you can contact [Microsoft Support](https://support.microsoft.com/en-us/contactus/) to assist with the changes to the Azure AD side. +> [!NOTE] +> Although these steps usually are taken in an emergency or outage situation, we recommend that you test these steps and verify your procedures before an outage occurs. 
## Prerequisites-This tutorial builds upon the [Tutorial: Federate a single AD forest environment to the cloud](tutorial-federation.md) and is a per-requisite before attempting this tutorial. If you have not completed this tutorial, do so before attempting the steps in this document. ->[!IMPORTANT] ->Prior to switching to PHS you should create a backup of your AD FS environment. This can be done using the [AD FS Rapid Restore Tool](/windows-server/identity/ad-fs/operations/ad-fs-rapid-restore-tool#how-to-use-the-tool). +This tutorial builds on [Tutorial: Use federation for hybrid identity in a single Active Directory forest](tutorial-federation.md). Completing the tutorial is a prerequisite to completing the steps in this tutorial. ++> [!NOTE] +> If you don't have access to an Azure AD Connect server or the server doesn't have internet access, you can contact [Microsoft Support](https://support.microsoft.com/contactus/) to assist with the changes to Azure Active Directory (Azure AD). ++## Enable password hash sync in Azure AD Connect ++In [Tutorial: Use federation for hybrid identity in a single Active Directory forest](tutorial-federation.md), you created an Azure AD Connect environment that's using federation. ++Your first step in setting up your backup for federation is to turn on password hash sync and set Azure AD Connect to sync the hashes: -## Enable PHS in Azure AD Connect -The first step, now that we have an Azure AD Connect environment that is using federation, is to turn on password hash sync and allow Azure AD Connect to synchronize the hashes. +1. Double-click the Azure AD Connect icon that was created on the desktop during installation. +1. Select **Configure**. +1. In **Additional tasks**, select **Customize synchronization options**, and then select **Next**. 
-Do the following: + :::image type="content" source="media/tutorial-phs-backup/backup2.png" alt-text="Screenshot that shows the Additional tasks pane, with Customize synchronization options selected."::: +1. Enter the username and password for the [Hybrid Identity Administrator account you created](tutorial-federation.md#create-a-hybrid-identity-administrator-account-in-azure-ad) in the tutorial to set up federation. +1. In **Connect your directories**, select **Next**. +1. In **Domain and OU filtering**, select **Next**. +1. In **Optional features**, select **Password hash synchronization**, and then select **Next**. -1. Double-click the Azure AD Connect icon that was created on the desktop -2. Click **Configure**. -3. On the Additional tasks page, select **Customize synchronization options** and click **Next**. -4. Enter the username and password for your Hybrid Identity Administrator or your hybrid identity administrator. This account was created [here](tutorial-federation.md#create-a-hybrid-identity-administrator-in-azure-ad) in the previous tutorial. -5. On the **Connect your directories** screen, click **Next**. -6. On the **Domain and OU filtering** screen, click **Next**. -7. On the **Optional features** screen, check **Password hash synchronization** and click **Next**. -</br> -8. On the **Ready to configure** screen click **Configure**. -9. Once the configuration completes, click **Exit**. -10. That's it! You are done. Password hash synchronization will now occur and can be used as a backup if AD FS becomes unavailable. + :::image type="content" source="media/tutorial-phs-backup/backup1.png" alt-text="Screenshot that shows the Optional features pane, with Password hash synchronization selected."::: +1. In **Ready to configure**, select **Configure**. +1. When configuration is finished, select **Exit**. -## Switch to password hash synchronization -Now, we will show you how to switch over to password hash synchronization. 
Before you start, consider under which conditions should you make the switch. Don't make the switch for temporary reasons, like a network outage, a minor AD FS problem, or a problem that affects a subset of your users. If you decide to make the switch because fixing the problem will take too long, do the following: +That's it! You're done. Password hash sync will now occur, and it can be used as a backup if AD FS becomes unavailable. ++## Switch to password hash sync > [!IMPORTANT]-> Be aware that it will take some time for the password hashes to synchronize to Azure AD. This means that it may take up 3 hours for the synchronizations to complete and before you can start authenticating using the password hashes. --1. Double-click the Azure AD Connect icon that was created on the desktop -2. Click **Configure**. -3. Select **Change user sign-in** and click **Next**. -</br> -4. Enter the username and password for your Hybrid Identity Administratoristrator or your hybrid identity administrator. This account was created [here](tutorial-federation.md#create-a-hybrid-identity-administrator-in-azure-ad) in the previous tutorial. -5. On the **User sign-in** screen, select **Password Hash Synchronization** and place a check in the **Do not convert user accounts** box. -6. Leave the default **Enable single sign-on** selected and click **Next**. -7. On the **Enable single sign-on** screen click **Next**. -8. On the **Ready to configure** screen, click **Configure**. -9. Once configuration is complete, click **Exit**. -10. Users can now use their passwords to sign in to Azure and Azure services. --## Test signing in with one of our users --1. Browse to [https://myapps.microsoft.com](https://myapps.microsoft.com) -2. Sign in with a user account that was created in our new tenant. You will need to sign in using the following format: (user@domain.onmicrosoft.com). 
Use the same password that the user uses to sign in on-premises.</br> - </br> +> +> - Before you switch to password hash sync, create a backup of your AD FS environment. You can create a backup by using the [AD FS Rapid Restore Tool](/windows-server/identity/ad-fs/operations/ad-fs-rapid-restore-tool#how-to-use-the-tool). +> +> - It takes some time for the password hashes to sync to Azure AD. It might be up to three hours before the sync finishes and you can start authenticating by using the password hashes. ++Next, switch over to password hash synchronization. Before you start, consider in which conditions you should make the switch. Don't make the switch for temporary reasons, like a network outage, a minor AD FS problem, or a problem that affects a subset of your users. ++If you decide to make the switch because fixing the problem will take too long, complete these steps: ++1. In Azure AD Connect, select **Configure**. +1. Select **Change user sign-in**, and then select **Next**. +1. Enter the username and password for the [Hybrid Identity Administrator account you created](tutorial-federation.md#create-a-hybrid-identity-administrator-account-in-azure-ad) in the tutorial to set up federation. +1. In **User sign-in**, select **Password hash synchronization**, and then select the **Do not convert user accounts** checkbox. +1. Leave the default **Enable single sign-on** selected and select **Next**. +1. In **Enable single sign-on**, select **Next**. +1. In **Ready to configure**, select **Configure**. +1. When configuration is finished, select **Exit**. ++Users can now use their passwords to sign in to Azure and Azure services. ++## Sign in with a user account to test sync ++1. In a new web browser window, go to [https://myapps.microsoft.com](https://myapps.microsoft.com). +1. Sign in with a user account that was created in your new tenant. ++ For the username, use the format `user@domain.onmicrosoft.com`. 
Use the same password the user uses to sign in to on-premises Active Directory. ++ :::image type="content" source="media/tutorial-federation/verify1.png" alt-text="Screenshot that shows a successful message when testing the sign-in."::: ## Switch back to federation-Now, we will show you how to switch back to federation. To do this, do the following: --1. Double-click the Azure AD Connect icon that was created on the desktop -2. Click **Configure**. -3. Select **Change user sign-in** and click **Next**. -4. Enter the username and password for your Hybrid Identity Administrator or your hybrid identity administrator. This is the account that was created [here](tutorial-federation.md#create-a-hybrid-identity-administrator-in-azure-ad) in the previous tutorial. -5. On the **User sign-in** screen, select **Federation with AD FS** and click **Next**. -6. On the Domain Administrator credentials page, enter the contoso\Administrator username and password and click **Next.** -7. On the AD FS farm screen, click **Next**. -8. On the **Azure AD domain** screen, select the domain from the drop-down and click **Next**. -9. On the **Ready to configure** screen, click **Configure**. -10. Once configuration is complete, click **Next**. -</br> -11. On the **Verify federation connectivity** screen, click **Verify**. You may need to configure DNS records (add A and AAAA records) for this to complete successfully. -</br> -12. Click **Exit**. ++Now, switch back to federation: ++1. In Azure AD Connect, select **Configure**. +1. Select **Change user sign-in**, and then select **Next**. +1. Enter the username and password for your Hybrid Identity Administrator account. +1. In **User sign-in**, select **Federation with AD FS**, and then select **Next**. +1. In **Domain Administrator credentials**, enter the contoso\Administrator username and password, and then select **Next.** +1. In **AD FS farm**, select **Next**. +1. In **Azure AD domain**, select the domain and select **Next**. +1. 
In **Ready to configure**, select **Configure**. +1. When configuration is finished, select **Next**. ++ :::image type="content" source="media/tutorial-phs-backup/backup4.png" alt-text="Screenshot that shows the Configuration complete pane."::: +1. In **Verify federation connectivity**, select **Verify**. You might need to configure DNS records (add A and AAAA records) for verification to finish successfully. ++ :::image type="content" source="media/tutorial-phs-backup/backup5.png" alt-text="Screenshot that shows the Verify federation connectivity dialog and the Verify button."::: +1. Select **Exit**. ## Reset the AD FS and Azure trust-Now we need to reset the trust between AD FS and Azure. -1. Double-click the Azure AD Connect icon that was created on the desktop -2. Click **Configure**. -3. Select **Manage Federation** and click **Next**. -4. Select **Reset Azure AD trust** and click **Next**. -</br> -5. On the **Connect to Azure AD** screen enter the username and password for your global administrator or your hybrid identity administrator. -6. On the **Connect to AD FS** screen, enter the contoso\Administrator username and password and click **Next.** -7. On the **Certificates** screen, click **Next**. +The final task is to reset the trust between AD FS and Azure: -## Test signing in with a user +1. In Azure AD Connect, select **Configure**. +1. Select **Manage federation**, and then select **Next**. +1. Select **Reset Azure AD trust**, and then select **Next**. -1. Browse to [https://myapps.microsoft.com](https://myapps.microsoft.com) -2. Sign-in with a user account that was created in our new tenant. You will need to sign-in using the following format: (user@domain.onmicrosoft.com). Use the same password that the user uses to sign-in on-premises. - + :::image type="content" source="media/tutorial-phs-backup/backup6.png" alt-text="Screenshot that shows the Manage federation pane, with Reset Azure AD selected."::: +1. 
In **Connect to Azure AD**, enter the username and password for your Global Administrator account or your Hybrid Identity Administrator account. +1. In **Connect to AD FS**, enter the contoso\Administrator username and password, and then select **Next.** +1. In **Certificates**, select **Next**. +1. Repeat the steps in [Sign in with a user account to test sync](#sign-in-with-a-user-account-to-test-sync). -You have now successfully setup a hybrid identity environment that you can use to test and familiarize yourself with what Azure has to offer. +You've successfully set up a hybrid identity environment that you can use to test and to get familiar with what Azure has to offer. ## Next steps --- [Hardware and prerequisites](how-to-connect-install-prerequisites.md) -- [Express settings](how-to-connect-install-express.md)-- [Password hash synchronization](how-to-connect-password-hash-synchronization.md)+- Review [Azure AD Connect hardware and prerequisites](how-to-connect-install-prerequisites.md). +- Learn how to use [Express settings](how-to-connect-install-express.md) in Azure AD Connect. +- Learn more about [password hash sync](how-to-connect-password-hash-synchronization.md) with Azure AD Connect. |
active-directory | Manage Application Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/manage-application-permissions.md | $spApplicationPermissions = Get-AzureADServicePrincipalAppRoleAssignedTo -Object # Remove all application permissions $spApplicationPermissions | ForEach-Object {- Remove-AzureADServicePrincipalAppRoleAssignment -ObjectId $_.PrincipalId -AppRoleAssignmentId $_.objectId + Remove-AzureADServiceAppRoleAssignment -ObjectId $_.PrincipalId -AppRoleAssignmentId $_.objectId } ``` Run the following queries to remove appRoleAssignments of users or groups to the ## Next steps - [Configure user consent setting](configure-user-consent.md)-- [Configure admin consent workflow](configure-admin-consent-workflow.md)+- [Configure admin consent workflow](configure-admin-consent-workflow.md) |
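The PowerShell loop in the change above removes each app role assignment one by one. As an illustration only, here's a sketch in Python that builds the equivalent Microsoft Graph REST requests without sending them. The `sp1`/`a1` IDs are placeholders, and the Graph route (`DELETE .../appRoleAssignedTo/{id}`) is an assumption to verify against the current Graph reference before use:

```python
def build_revoke_requests(resource_sp_id: str, assignments: list) -> list:
    """For each appRoleAssignment granted to the service principal, build the
    (method, URL) pair that would delete it. Nothing is executed here."""
    base = f"https://graph.microsoft.com/v1.0/servicePrincipals/{resource_sp_id}/appRoleAssignedTo"
    return [("DELETE", f"{base}/{a['id']}") for a in assignments]

# Placeholder assignment objects, shaped like the results of GET .../appRoleAssignedTo
requests_to_send = build_revoke_requests("sp1", [{"id": "a1"}, {"id": "a2"}])
```

Separating request construction from execution keeps the sketch testable and makes it easy to review exactly which assignments would be removed before committing to the change.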
active-directory | Cross Tenant Synchronization Configure Graph | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/cross-tenant-synchronization-configure-graph.md | This article describes the key steps to configure cross-tenant synchronization u ## Prerequisites -- A source [Azure AD tenant](../develop/quickstart-create-new-tenant.md) with a Premium P1 or P2 license-- A target [Azure AD tenant](../develop/quickstart-create-new-tenant.md) with a Premium P1 or P2 license-- An account in the source tenant with the [Hybrid Identity Administrator](../roles/permissions-reference.md#hybrid-identity-administrator) role to configure cross-tenant provisioning-- An account in the target tenant with the [Hybrid Identity Administrator](../roles/permissions-reference.md#hybrid-identity-administrator) role to configure the cross-tenant synchronization policy+### Source tenant -## Step 1: Sign in to the target tenant and consent to permissions +- Azure AD Premium P1 or P2 license +- [Security Administrator](../roles/permissions-reference.md#security-administrator) role to configure cross-tenant access settings +- [Hybrid Identity Administrator](../roles/permissions-reference.md#hybrid-identity-administrator) role to configure cross-tenant synchronization +- [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator) or [Application Administrator](../roles/permissions-reference.md#application-administrator) role to assign users to a configuration and to delete a configuration +- [Global Administrator](../roles/permissions-reference.md#global-administrator) role to consent to required permissions -<br/>**Target tenant** +### Target tenant ++- Azure AD Premium P1 or P2 license +- [Security Administrator](../roles/permissions-reference.md#security-administrator) role to configure cross-tenant access settings +- [Global 
Administrator](../roles/permissions-reference.md#global-administrator) role to consent to required permissions ++## Step 1: Sign in to tenants and consent to permissions ++ <br/>**Source and target tenants** These steps describe how to use Microsoft Graph Explorer (recommended), but you can also use Postman, or another REST API client. 1. Start [Microsoft Graph Explorer tool](https://aka.ms/ge). -1. Sign in to the target tenant. +1. Sign in to the source tenant. ++1. Select your profile and then select **Consent to permissions**. -1. Select **Modify permissions**. + :::image type="content" source="./media/cross-tenant-synchronization-configure-graph/graph-explorer-profile.png" alt-text="Screenshot of Graph Explorer profile with Consent to permissions link." lightbox="./media/cross-tenant-synchronization-configure-graph/graph-explorer-profile.png"::: 1. Consent to the following required permissions: - `Policy.Read.All` - `Policy.ReadWrite.CrossTenantAccess`+ - `Application.ReadWrite.All` + - `Directory.ReadWrite.All` ++ If you see a **Need admin approval** page, you'll need to sign in with a user that has been assigned the Global Administrator role to consent. ++1. Start another instance of [Microsoft Graph Explorer tool](https://aka.ms/ge). ++1. Sign in to the target tenant. ++1. Consent to the following required permissions: ++ - `Policy.Read.All` + - `Policy.ReadWrite.CrossTenantAccess` ++1. Get the tenant ID of the source and target tenants. The example configuration described in this article uses the following tenant IDs: ++ - Source tenant ID: 3d0f5dec-5d3d-455c-8016-e2af1ae4d31a + - Target tenant ID: 376a1f89-b02f-4a85-8252-2974d1984d14 ## Step 2: Enable user synchronization in the target tenant <br/>**Target tenant** -1. 
Use the [Create crossTenantAccessPolicyConfigurationPartner](/graph/api/crosstenantaccesspolicy-post-partners?view=graph-rest-beta&preserve-view=true) API to create a new partner configuration in a cross-tenant access policy between the target tenant and the source tenant. +1. In the target tenant, use the [Create crossTenantAccessPolicyConfigurationPartner](/graph/api/crosstenantaccesspolicy-post-partners?view=graph-rest-beta&preserve-view=true) API to create a new partner configuration in a cross-tenant access policy between the target tenant and the source tenant. Use the source tenant ID in the request. **Request** These steps describe how to use Microsoft Graph Explorer (recommended), but you Content-Type: application/json {+ "@odata.context": "https://graph.microsoft.com/beta/$metadata#policies/crossTenantAccessPolicy/partners/$entity", "tenantId": "3d0f5dec-5d3d-455c-8016-e2af1ae4d31a", "isServiceProvider": null, "inboundTrust": null, These steps describe how to use Microsoft Graph Explorer (recommended), but you Content-type: application/json {+ "displayName": "Fabrikam", "userSyncInbound": { "isSyncAllowed": true These steps describe how to use Microsoft Graph Explorer (recommended), but you <br/>**Target tenant** -1. Use the [Update crossTenantAccessPolicyConfigurationPartner](/graph/api/crosstenantaccesspolicyconfigurationpartner-update?view=graph-rest-beta&preserve-view=true) API to automatically redeem invitations and suppress consent prompts for inbound access. +1. In the target tenant, use the [Update crossTenantAccessPolicyConfigurationPartner](/graph/api/crosstenantaccesspolicyconfigurationpartner-update?view=graph-rest-beta&preserve-view=true) API to automatically redeem invitations and suppress consent prompts for inbound access. **Request** These steps describe how to use Microsoft Graph Explorer (recommended), but you <br/>**Source tenant** -1. Sign in to the source tenant. --2. 
Use the [Create crossTenantAccessPolicyConfigurationPartner](/graph/api/crosstenantaccesspolicy-post-partners?view=graph-rest-beta&preserve-view=true) API to create a new partner configuration in a cross-tenant access policy between the source tenant and the target tenant. +1. In the source tenant, use the [Create crossTenantAccessPolicyConfigurationPartner](/graph/api/crosstenantaccesspolicy-post-partners?view=graph-rest-beta&preserve-view=true) API to create a new partner configuration in a cross-tenant access policy between the source tenant and the target tenant. Use the target tenant ID in the request. **Request** These steps describe how to use Microsoft Graph Explorer (recommended), but you Content-Type: application/json {+ "@odata.context": "https://graph.microsoft.com/beta/$metadata#policies/crossTenantAccessPolicy/partners/$entity", "tenantId": "376a1f89-b02f-4a85-8252-2974d1984d14", "isServiceProvider": null, "inboundTrust": null, These steps describe how to use Microsoft Graph Explorer (recommended), but you } ``` -3. Use the [Update crossTenantAccessPolicyConfigurationPartner](/graph/api/crosstenantaccesspolicyconfigurationpartner-update?view=graph-rest-beta&preserve-view=true) API to automatically redeem invitations and suppress consent prompts for outbound access. +1. Use the [Update crossTenantAccessPolicyConfigurationPartner](/graph/api/crosstenantaccesspolicyconfigurationpartner-update?view=graph-rest-beta&preserve-view=true) API to automatically redeem invitations and suppress consent prompts for outbound access. 
**Request** These steps describe how to use Microsoft Graph Explorer (recommended), but you Content-type: application/json {+ "@odata.context": "https://graph.microsoft.com/beta/$metadata#microsoft.graph.applicationServicePrincipal", "application": { "objectId": "{objectId}", "appId": "{appId}", These steps describe how to use Microsoft Graph Explorer (recommended), but you HTTP/1.1 204 No Content ``` -## Step 7: Assign a user to the configuration --<br/>**Source tenant** --For cross-tenant synchronization to work, at least one internal user must be assigned to the configuration. --1. In the source tenant, use the [Grant an appRoleAssignment for a service principal](/graph/api/serviceprincipal-post-approleassignedto) API to assign an internal user to the configuration. -- **Request** - - ```http - POST https://graph.microsoft.com/v1.0/servicePrincipals/{servicePrincipalId}/appRoleAssignedTo - Content-type: application/json - - { - "appRoleId": "{appRoleId}", - "resourceId": "{servicePrincipalId}", - "principalId": "{principalId}" - } - ``` -- **Response** - - ```http - HTTP/1.1 201 Created - Content-Type: application/json - { - "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#servicePrincipals('{servicePrincipalId}')/appRoleAssignedTo/$entity", - "id": "{keyId}", - "deletedDateTime": null, - "appRoleId": "{appRoleId}", - "createdDateTime": "2022-11-27T22:23:48.6541804Z", - "principalDisplayName": "User1", - "principalId": "{principalId}", - "principalType": "User", - "resourceDisplayName": "Fabrikam", - "resourceId": "{servicePrincipalId}" - } - ``` --## Step 8: Create a provisioning job in the source tenant +## Step 7: Create a provisioning job in the source tenant <br/>**Source tenant** In the source tenant, to enable provisioning, create a provisioning job. 
Content-type: application/json {+ "@odata.context": "https://graph.microsoft.com/beta/$metadata#servicePrincipals('{servicePrincipalId}')/synchronization/jobs/$entity", "id": "{jobId}", "templateId": "Azure2Azure", "schedule": { In the source tenant, to enable provisioning, create a provisioning job. } ``` -## Step 9: Save your credentials +## Step 8: Save your credentials <br/>**Source tenant** -1. Use the [synchronization: secrets](/graph/api/synchronization-synchronization-secrets?view=graph-rest-beta&preserve-view=true) API to save your credentials. +1. In the source tenant, use the [synchronization: secrets](/graph/api/synchronization-synchronization-secrets?view=graph-rest-beta&preserve-view=true) API to save your credentials. **Request** In the source tenant, to enable provisioning, create a provisioning job. HTTP/1.1 204 No Content ``` +## Step 9: Assign a user to the configuration ++<br/>**Source tenant** ++For cross-tenant synchronization to work, at least one internal user must be assigned to the configuration. ++1. In the source tenant, use the [Grant an appRoleAssignment for a service principal](/graph/api/serviceprincipal-post-approleassignedto) API to assign an internal user to the configuration. 
++ **Request** + + ```http + POST https://graph.microsoft.com/v1.0/servicePrincipals/{servicePrincipalId}/appRoleAssignedTo + Content-type: application/json + + { + "appRoleId": "{appRoleId}", + "resourceId": "{servicePrincipalId}", + "principalId": "{principalId}" + } + ``` ++ **Response** + + ```http + HTTP/1.1 201 Created + Content-Type: application/json + { + "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#servicePrincipals('{servicePrincipalId}')/appRoleAssignedTo/$entity", + "id": "{keyId}", + "deletedDateTime": null, + "appRoleId": "{appRoleId}", + "createdDateTime": "2022-11-27T22:23:48.6541804Z", + "principalDisplayName": "User1", + "principalId": "{principalId}", + "principalType": "User", + "resourceDisplayName": "Fabrikam", + "resourceId": "{servicePrincipalId}" + } + ``` + ## Step 10: Test provision on demand <br/>**Source tenant** Now that you have a configuration, you can test on-demand provisioning with one of your users. -1. Use the [synchronizationJob: provisionOnDemand](/graph/api/synchronization-synchronizationjob-provision-on-demand?view=graph-rest-beta&preserve-view=true) API to provision a test user on demand. +1. In the source tenant, use the [synchronizationJob: provisionOnDemand](/graph/api/synchronization-synchronizationjob-provision-on-demand?view=graph-rest-beta&preserve-view=true) API to provision a test user on demand. **Request** Now that you have a configuration, you can test on-demand provisioning with one <br/>**Source tenant** -1. Now that the provisioning job is configured, use the [Start synchronizationJob](/graph/api/synchronization-synchronizationjob-start?view=graph-rest-beta&preserve-view=true) API to start the provisioning job. +1. Now that the provisioning job is configured, in the source tenant, use the [Start synchronizationJob](/graph/api/synchronization-synchronizationjob-start?view=graph-rest-beta&preserve-view=true) API to start the provisioning job. 
**Request** Now that you have a configuration, you can test on-demand provisioning with one <br/>**Source tenant** -1. Now that the provisioning job is running, use the [Get synchronizationJob](/graph/api/synchronization-synchronizationjob-get?view=graph-rest-beta&preserve-view=true) API to monitor the progress of the current provisioning cycle as well as statistics to date such as the number of users and groups that have been created in the target system. +1. Now that the provisioning job is running, in the source tenant, use the [Get synchronizationJob](/graph/api/synchronization-synchronizationjob-get?view=graph-rest-beta&preserve-view=true) API to monitor the progress of the current provisioning cycle as well as statistics to date such as the number of users and groups that have been created in the target system. **Request** Either the signed-in user doesn't have sufficient privileges, or you need to con **Solution** -1. Make sure you're assigned the [Hybrid Identity Administrator](../roles/permissions-reference.md#hybrid-identity-administrator) role or another Azure AD role with privileges. +1. Make sure you're assigned the required roles. See [Prerequisites](#prerequisites) earlier in this article. -2. In [Microsoft Graph Explorer tool](https://aka.ms/ge), make sure you consent to the required permissions: -- - `Policy.Read.All` - - `Policy.ReadWrite.CrossTenantAccess` +2. In [Microsoft Graph Explorer tool](https://aka.ms/ge), make sure you consent to the required permissions. See [Step 1: Sign in to tenants and consent to permissions](#step-1-sign-in-to-tenants-and-consent-to-permissions) earlier in this article. ## Next steps |
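The appRoleAssignment request shown in the assignment step above can be assembled programmatically before it's posted from Graph Explorer or any REST client. A minimal sketch that only builds the URL and body from the article (the three ID values are placeholders, and no request is sent):

```python
def app_role_assignment_request(service_principal_id: str, app_role_id: str, principal_id: str):
    """Build the POST .../appRoleAssignedTo request that assigns an internal
    user to the cross-tenant synchronization configuration."""
    url = f"https://graph.microsoft.com/v1.0/servicePrincipals/{service_principal_id}/appRoleAssignedTo"
    body = {
        "appRoleId": app_role_id,
        "resourceId": service_principal_id,  # the configuration's service principal
        "principalId": principal_id,         # the internal user to provision
    }
    return url, body

url, body = app_role_assignment_request("sp-id", "role-id", "user-id")
```

Note that `resourceId` and the service principal in the URL are the same object, matching the request body shown in the article.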
active-directory | Cross Tenant Synchronization Configure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/cross-tenant-synchronization-configure.md | By the end of this article, you'll be able to: ## Prerequisites -- A source [Azure AD tenant](../develop/quickstart-create-new-tenant.md) with a Premium P1 or P2 license-- A target [Azure AD tenant](../develop/quickstart-create-new-tenant.md) with a Premium P1 or P2 license-- An account in the source tenant with the [Hybrid Identity Administrator](../roles/permissions-reference.md#hybrid-identity-administrator) role to configure cross-tenant provisioning-- An account in the target tenant with the [Hybrid Identity Administrator](../roles/permissions-reference.md#hybrid-identity-administrator) role to configure the cross-tenant synchronization policy+### Source tenant ++- Azure AD Premium P1 or P2 license +- [Security Administrator](../roles/permissions-reference.md#security-administrator) role to configure cross-tenant access settings +- [Hybrid Identity Administrator](../roles/permissions-reference.md#hybrid-identity-administrator) role to configure cross-tenant synchronization +- [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator) or [Application Administrator](../roles/permissions-reference.md#application-administrator) role to assign users to a configuration and to delete a configuration ++### Target tenant ++- Azure AD Premium P1 or P2 license +- [Security Administrator](../roles/permissions-reference.md#security-administrator) role to configure cross-tenant access settings ## Step 1: Plan your provisioning deployment |
active-directory | Overview Recommendations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-recommendations.md | Title: What is Azure Active Directory recommendations (preview)? | Microsoft Docs + Title: What is Azure Active Directory recommendations? | Microsoft Docs description: Provides a general overview of Azure Active Directory recommendations. -# What is Azure Active Directory recommendations (preview)? +# What is Azure Active Directory recommendations? -This feature is supported as part of a public preview. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). +Keeping track of all the settings and resources in your tenant can be overwhelming. The Azure Active Directory (Azure AD) recommendations feature helps monitor the status of your tenant so you don't have to. Azure AD recommendations helps ensure your tenant is in a secure and healthy state while also helping you maximize the value of the features available in Azure AD. -Keeping track of all the settings and resources in your tenant can be overwhelming. The Azure AD recommendations (preview) feature helps monitor the status of your tenant so you don't have to. Azure AD recommendations helps ensure your tenant is in a secure and healthy state while also helping you maximize the value of the features available in Azure AD. --The Azure AD recommendations feature provides you personalized insights with actionable guidance to: +The Azure AD recommendations feature provides you with personalized insights with actionable guidance to: - Help you identify opportunities to implement best practices for Azure AD-related features. - Improve the state of your Azure AD tenant. This article gives you an overview of how you can use Azure AD recommendations. 
## What it is -Azure AD recommendations is the Azure AD specific implementation of [Azure Advisor](../../advisor/advisor-overview.md), which is a personalized cloud consultant that helps you follow best practices to optimize your Azure deployments. Azure Advisor analyzes your resource configuration and usage telemetry to recommend solutions that can help you improve the cost effectiveness, performance, reliability, and security of your Azure resources. +Azure AD recommendations is the Azure AD specific implementation of [Azure Advisor](../../advisor/advisor-overview.md), which is a personalized cloud consultant that helps you follow best practices to optimize your Azure deployments. Azure Advisor analyzes your resource configuration and usage data to recommend solutions that can help you improve the cost effectiveness, performance, reliability, and security of your Azure resources. *Azure AD recommendations* uses similar data to support you with the roll-out and management of Microsoft's best practices for Azure AD tenants to keep your tenant in a secure and healthy state. Azure AD recommendations provide a holistic view into your tenant's security, health, and usage. Azure AD recommendations is the Azure AD specific implementation of [Azure Advis On a daily basis, Azure AD analyzes the configuration of your tenant. During this analysis, Azure AD compares the data of a recommendation with the actual configuration of your tenant. If a recommendation is flagged as applicable to your tenant, the recommendation appears in the **Recommendations** section of the Azure AD Overview area. Recommendations are listed in order of priority so you can quickly determine where to focus first. -Recommendations contain a description, a summary of the value of addressing the recommendation, and a step-by-step action plan. If applicable, impacted resources that are associated with the recommendation are listed, so you can resolve each affected area. 
If a recommendation doesn't have any associated resources, the impacted resource type is *Tenant level*, so your step-by-step action plan impacts the entire tenant and not just a specific resource. +Recommendations contain a description, a summary of the value of addressing the recommendation, and a step-by-step action plan. If applicable, impacted resources associated with the recommendation are listed, so you can resolve each affected area. If a recommendation doesn't have any associated resources, the impacted resource type is *Tenant level*, so your step-by-step action plan impacts the entire tenant and not just a specific resource.  The recommendation's **Value** is an explanation of why completing the recommend The **Action plan** provides step-by-step instructions to implement a recommendation. It may include links to relevant documentation or direct you to other pages in the Azure AD portal. -## What you should know +## Roles and licenses The following roles provide *read-only* access to recommendations: The following roles provide *update and read-only* access to recommendations: - Cloud apps Administrator - Apps Administrator -Any role can enable the Azure AD recommendations preview, but you'll need one of the roles listed above to view or update recommendations. Azure AD only displays the recommendations that apply to your tenant, so you may not see all supported recommendations listed. --Some recommendations have a list of impacted resources associated. This list of resources gives you more context on how the recommendation applies to you and/or which resources you need to address. The only action recorded in the audit log is completing recommendations. Actions taken on a recommendation are collected in the audit log. To view these logs, go to **Azure AD** > **Audit logs** and filter the service to "Azure AD recommendations." +Azure AD recommendations is automatically enabled. 
If you'd like to disable this feature, go to **Azure AD** > **Preview features**. Locate the **Recommendations** feature, and change the **State**. -The table below provides the impacted resources and links available documentation. +Azure AD only displays the recommendations that apply to your tenant, so you may not see all supported recommendations listed. Some recommendations are available in all tenants, regardless of the license type, but others require the [Workload Identities premium license](../identity-protection/concept-workload-identity-risk.md). -| Recommendation | Impacted resources | -|- |- | -| [Convert per-user MFA to Conditional Access MFA](recommendation-turn-off-per-user-mfa.md) | Users | -| [Integrate 3rd party applications](recommendation-integrate-third-party-apps.md) | Tenant level | -| [Migrate applications from AD FS to Azure AD](recommendation-migrate-apps-from-adfs-to-azure-ad.md) | Users | -| [Migrate to Microsoft Authenticator](recommendation-migrate-to-authenticator.md) | Users | -| [Minimize MFA prompts from known devices](recommendation-migrate-apps-from-adfs-to-azure-ad.md) | Users | +### Recommendations available for all Azure AD tenants -## How to access Azure AD recommendations (preview) +The recommendations listed in the following table are available to all Azure AD tenants. The table provides the impacted resources and links to available documentation. 
-To enable the Azure AD recommendations preview: +| Recommendation | Impacted resources | Availability | +|- |- |- | +| [Convert per-user MFA to Conditional Access MFA](recommendation-turn-off-per-user-mfa.md) | Users | Generally available | +| [Migrate applications from AD FS to Azure AD](recommendation-migrate-apps-from-adfs-to-azure-ad.md) | Users | Generally available | +| [Migrate to Microsoft Authenticator](recommendation-migrate-to-authenticator.md) | Users | Preview | +| [Minimize MFA prompts from known devices](recommendation-mfa-from-known-devices.md) | Users | Generally available | -1. Sign in to the [Azure portal](https://portal.azure.com/). +### Recommendations available for Workload Identities premium licenses -1. Go to **Azure AD** > **Preview features** and enable **Azure AD recommendations.** - - Recommendations may take a few minutes to sync. - - While anyone can enable the preview feature, you'll need a [specific role](overview-recommendations.md#what-you-should-know) to view or update a recommendation. +The recommendations listed in the following table are available to Azure AD tenants with a Workload Identities premium license. -  +| Recommendation | Impacted resources | Availability | +|- |- |- | +| Remove unused applications | Applications | Preview | +| Remove unused credentials from applications | Applications | Preview | +| Renew expiring application credentials | Applications | Preview | +| Renew expiring service principal credentials | Applications | Preview | -After the preview is enabled, you can view the available recommendations from the Azure AD administration portal. The Azure AD recommendations feature appears on the **Overview** page of your tenant. --## How to use Azure AD recommendations (preview) +## How to use Azure AD recommendations 1. Go to **Azure AD** > **Recommendations**. 
After the preview is enabled, you can view the available recommendations from th - Mark a recommendation as **Completed** if all impacted resources have been addressed. - Active resources may still appear in the list of resources for manually completed recommendations. If the resource is completed, the service will update the status the next time the service runs. - If the service identifies an active resource for a manually completed recommendation the next time the service runs, the recommendation will automatically change back to **Active**.+ - Completing a recommendation is the only action collected in the audit log. To view these logs, go to **Azure AD** > **Audit logs** and filter the service to "Azure AD recommendations." - Mark a recommendation as **Dismissed** if you think the recommendation is irrelevant or the data is wrong. - Azure AD will ask for a reason why you dismissed the recommendation so we can improve the service. - Mark a recommendation as **Postponed** if you want to address the recommendation at a later time. After the preview is enabled, you can view the available recommendations from th Continue to monitor the recommendations in your tenant for changes. +### Use Microsoft Graph with Azure Active Directory recommendations ++Azure Active Directory recommendations can be viewed and managed using Microsoft Graph on the `/beta` endpoint. You can view recommendations along with their impacted resources, mark a recommendation as completed by a user, postpone a recommendation for later, and more. ++To get started, follow these instructions to work with recommendations using Microsoft Graph in Graph Explorer. The example uses the Migrate apps from Active Directory Federation Services (AD FS) to Azure AD recommendation. ++1. Sign in to [Graph Explorer](https://aka.ms/ge). +1. Select **GET** as the HTTP method from the dropdown. +1. Set the API version to **beta**. +1. 
Add the following query to retrieve recommendations, then select the **Run query** button. ++ ```http + GET https://graph.microsoft.com/beta/directory/recommendations + ``` ++1. To view the details of a specific `recommendationType`, use the following API. This example retrieves the detail of the "Migrate apps from AD FS to Azure AD" recommendation. ++ ```http + GET https://graph.microsoft.com/beta/directory/recommendations?$filter=recommendationType eq 'adfsAppsMigration' + ``` ++1. To view the impacted resources for a specific recommendation, expand the `impactedResources` relationship. ++ ```http + GET https://graph.microsoft.com/beta/directory/recommendations?$filter=recommendationType eq 'adfsAppsMigration'&$expand=impactedResources + ``` ++For more information, see the [Microsoft Graph documentation for recommendations](/graph/api/resources/recommendation). + ## Next steps -* [Activity logs in Azure Monitor](concept-activity-logs-azure-monitor.md) -* [Stream logs to event hub](tutorial-azure-monitor-stream-logs-to-event-hub.md) -* [Send logs to Azure Monitor logs](howto-integrate-activity-logs-with-log-analytics.md) +* [Learn more about Microsoft Graph](/graph/overview) +* [Get started with Azure AD reports](overview-reports.md) +* [Learn about Azure AD monitoring](overview-monitoring.md) |
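The read queries above can be paired with the status changes described in the how-to steps (complete, postpone, dismiss). The requests below are a sketch only: they assume the beta `complete` and `postpone` actions on the `directory/recommendations` resource remain available in this shape, and `{recommendationId}` is a placeholder for an ID returned by the earlier queries; confirm the exact action names and request bodies in the beta reference before relying on them.

```http
POST https://graph.microsoft.com/beta/directory/recommendations/{recommendationId}/complete

POST https://graph.microsoft.com/beta/directory/recommendations/{recommendationId}/postpone
Content-Type: application/json

{
  "postponeUntilDateTime": "2023-06-01T00:00:00Z"
}
```

As in the portal, status changes made through the API apply to the whole recommendation; individual impacted resources can be addressed separately.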
active-directory | Recommendation Integrate Third Party Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-integrate-third-party-apps.md | - Title: Azure Active Directory recommendation - Integrate third party apps with Azure AD | Microsoft Docs -description: Learn why you should integrate third party apps with Azure AD -------- Previously updated : 10/31/2022-------# Azure AD recommendation: Integrate third party apps --[Azure Active Directory (Azure AD) recommendations](overview-recommendations.md) is a feature that provides you with personalized insights and actionable guidance to align your tenant with recommended best practices. --This article covers the recommendation to integrate your third party apps with Azure AD. --## Description --As an Azure AD admin responsible for managing applications, you want to use the Azure AD security features with your third party apps. Integrating these apps into Azure AD enables you to use one unified method to manage access to your third party apps. Your users also benefit from using single sign-on to access all your apps with a single password. --If Azure AD determines that none of your users are using Azure AD to authenticate to your third party apps, this recommendation shows up. --## Value --Integrating third party apps with Azure AD allows you to utilize the core identity and access features provided by Azure AD. Manage access, single sign-on, and other properties. Add an extra security layer by using [Conditional Access](../conditional-access/overview.md) to control how your users can access your apps. --Integrating third party apps with Azure AD: -- Improves the productivity of your users.--- Lowers your app management cost.--## Action plan --1. Review the configuration of your apps. -2. For each app that isn't integrated into Azure AD, verify whether an integration is possible. 
- --## Next steps --- [Explore tutorials for integrating SaaS applications with Azure AD](../saas-apps/tutorial-list.md) |
active-directory | Recommendation Mfa From Known Devices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-mfa-from-known-devices.md | -This article covers the recommendation to convert minimize multi-factor authentication (MFA) prompts from known devices. -+This article covers the recommendation to minimize multi-factor authentication (MFA) prompts from known devices. This recommendation is called `tenantMFA` in the recommendations API in Microsoft Graph. ## Description The remember multi-factor authentication feature sets a persistent cookie on the  -- For more information, see [Configure Azure AD Multi-Factor Authentication settings](../authentication/howto-mfa-mfasettings.md). |
active-directory | Recommendation Migrate Apps From Adfs To Azure Ad | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-migrate-apps-from-adfs-to-azure-ad.md | -This article covers the recommendation to migrate apps from ADFS to Azure Active Directory (Azure AD). -+This article covers the recommendation to migrate apps from Active Directory Federation Services (AD FS) to Azure Active Directory (Azure AD). This recommendation is called `adfsAppsMigration` in the recommendations API in Microsoft Graph. ## Description As an admin responsible for managing applications, I want my applications to use Azure AD’s security features and maximize their value. --- ## Logic If a tenant has apps on AD FS, and any of these apps are deemed 100% migratable, this recommendation shows up. ## Value -Using Azure AD gives you granular per-application access controls to secure access to applications. With Azure AD's B2B collaboration, you can increase user productivity. Automated app provisioning automates the user identity lifecycle in cloud SaaS apps such as Dropbox, Salesforce and more. +Using Azure AD gives you granular per-application access controls to secure access to applications. With Azure AD's B2B collaboration, you can increase user productivity. Automated app provisioning automates the user identity lifecycle in cloud SaaS apps such as Dropbox, Salesforce and more. ## Action plan -1. [Install Azure AD Connect Health](../hybrid/how-to-connect-install-roadmap.md) on your AD FS server. Azure AD Connect Health installation. +1. [Install Azure AD Connect Health](../hybrid/how-to-connect-install-roadmap.md) on your AD FS server. 2. [Review the AD FS application activity report](../manage-apps/migrate-adfs-application-activity.md) to get insights about your AD FS applications. Using Azure AD gives you granular per-application access control 4. Migrate applications to Azure AD. 
For more information, use [the deployment plan for enabling single sign-on](https://go.microsoft.com/fwlink/?linkid=2110877&clcid=0x409). -- - ## Next steps -- [What is Azure Active Directory recommendations](overview-recommendations.md)--- [Azure AD reports overview](overview-reports.md)+* [What is Azure Active Directory recommendations](overview-recommendations.md) +* [Azure AD reports overview](overview-reports.md) +* [Learn more about Microsoft Graph](/graph/overview) +* [Explore the Microsoft Graph API properties for recommendations](/graph/api/resources/recommendation) |
active-directory | Recommendation Migrate To Authenticator | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-migrate-to-authenticator.md | -# Azure AD recommendation: Migrate to Microsoft authenticator +# Azure AD recommendation: Migrate to Microsoft Authenticator [Azure AD recommendations](overview-recommendations.md) is a feature that provides you with personalized insights and actionable guidance to align your tenant with recommended best practices. -This article covers the recommendation to migrate users to authenticator. -+This article covers the recommendation to migrate users to the Microsoft Authenticator app. This recommendation is called `useAuthenticatorApp` in the recommendations API in Microsoft Graph. ## Description Multi-factor authentication (MFA) is a key component to improve the security pos One possibility to accomplish this goal is to migrate users using SMS or voice call for MFA to use the Microsoft authenticator app. - ## Logic -If Azure AD detects that your tenant has users authenticating using SMS or voice in the past week instead of the authenticator app, this recommendation shows up. +This recommendation appears if Azure AD detects that your tenant has users authenticating using SMS or voice instead of the Microsoft Authenticator app in the past week. ## Value -- Push notifications through the Microsoft authenticator app provide the least intrusive MFA experience for users. 
This is the most reliable and secure option because it relies on a data connection rather than telephony.-- Verification code option using Microsoft authenticator app enables MFA even in isolated environments without data or cellular signals where SMS and Voice calls would not work.-- The Microsoft authenticator app is available for Android and iOS.-- Pathway to passwordless: Authenticator can be a traditional MFA factor (one-time passcodes, push notification) and when your organization is ready for Password-less, the authenticator app can be used sign-into Azure AD without a password.+Push notifications through the Microsoft Authenticator app provide the least intrusive MFA experience for users. This method is the most reliable and secure option because it relies on a data connection rather than telephony. ++The verification code option enables MFA even in isolated environments without data or cellular signals, where SMS and voice calls may not work. ++The Microsoft Authenticator app is available for Android and iOS. Microsoft Authenticator can serve as a traditional MFA factor (one-time passcodes, push notification) and when your organization is ready for passwordless, the Microsoft Authenticator app can be used to sign in to Azure AD without a password. ## Action plan If Azure AD detects that your tenant has users authenticating using SMS or voice 2. Educate users on how to add a work or school account. --- - ## Next steps -- [Tutorials for integrating SaaS applications with Azure Active Directory](../saas-apps/tutorial-list.md)-- [Azure AD reports overview](overview-reports.md)+* [Learn more about Microsoft Graph](/graph/overview) +* [Explore the Microsoft Graph API properties for recommendations](/graph/api/resources/recommendation) +* [Azure AD reports overview](overview-reports.md) |
active-directory | Recommendation Turn Off Per User Mfa | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-turn-off-per-user-mfa.md | -# Azure AD recommendation: Convert per-user MFA to Conditional Access MFA +# Azure AD recommendation: Switch from per-user MFA to Conditional Access MFA [Azure AD recommendations](overview-recommendations.md) is a feature that provides you with personalized insights and actionable guidance to align your tenant with recommended best practices. -This article covers the recommendation to convert per-user Multi-factor authentication (MFA) accounts to Conditional Access (CA) MFA accounts. +This article covers the recommendation to switch per-user Multi-factor authentication (MFA) accounts to Conditional Access (CA) MFA accounts. This recommendation is called `switchFromPerUserMFA` in the recommendations API in Microsoft Graph. ## Description As an admin, you want to maintain security for your company's resources, but you also want your employees to easily access resources as needed. MFA enables you to enhance the security posture of your tenant. -In your tenant, you can enable MFA on a per-user basis. In this scenario, your users perform MFA each time they sign in, with some exceptions, such as when they sign in from trusted IP addresses or when the remember MFA on trusted devices feature is turned on. +In your tenant, you can enable MFA on a per-user basis. In this scenario, your users perform MFA each time they sign in, with some exceptions, such as when they sign in from trusted IP addresses or when the remember MFA on trusted devices feature is turned on. 
While enabling MFA is a good practice, switching per-user MFA to MFA based on [Conditional Access](../conditional-access/overview.md) can reduce the number of times your users are prompted for MFA. This recommendation shows up if: After all users have been migrated to CA MFA accounts, the recommendation status ## Next steps -- [Learn about requiring MFA for all users using Conditional Access](../conditional-access/howto-conditional-access-policy-all-users-mfa.md)-- [View the MFA CA policy tutorial](../authentication/tutorial-enable-azure-mfa.md)+* [Learn about requiring MFA for all users using Conditional Access](../conditional-access/howto-conditional-access-policy-all-users-mfa.md) +* [View the MFA CA policy tutorial](../authentication/tutorial-enable-azure-mfa.md) +* [Learn more about Microsoft Graph](/graph/overview) +* [Explore the Microsoft Graph API properties for recommendations](/graph/api/resources/recommendation) |
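The Conditional Access policy that replaces per-user MFA can also be created programmatically through the Microsoft Graph `conditionalAccessPolicies` API. The request below is a minimal sketch: the display name is illustrative, and the policy is created in report-only mode (`enabledForReportingButNotEnforced`) so its impact can be evaluated before enforcement.

```http
POST https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies
Content-Type: application/json

{
  "displayName": "Require MFA for all users",
  "state": "enabledForReportingButNotEnforced",
  "conditions": {
    "users": { "includeUsers": ["All"] },
    "applications": { "includeApplications": ["All"] }
  },
  "grantControls": {
    "operator": "OR",
    "builtInControls": ["mfa"]
  }
}
```

Exclude at least one emergency access account before switching the policy state to `enabled`, and disable per-user MFA only after confirming the policy covers the same users.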
active-directory | Permissions Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/permissions-reference.md | Users in this role can create application registrations when the "Users can regi Users in this role can create attack payloads but not actually launch or schedule them. Attack payloads are then available to all administrators in the tenant who can use them to create a simulation. +For more information, see [Microsoft Defender for Office 365 permissions in the Microsoft 365 Defender portal](/microsoft-365/security/office-365-security/mdo-portal-permissions) and [Permissions in the Microsoft Purview compliance portal](/microsoft-365/compliance/microsoft-365-compliance-center-permissions). + > [!div class="mx-tableFixed"] > | Actions | Description | > | | | Users in this role can create attack payloads but not actually launch or schedul Users in this role can create and manage all aspects of attack simulation creation, launch/scheduling of a simulation, and the review of simulation results. Members of this role have this access for all simulations in the tenant. +For more information, see [Microsoft Defender for Office 365 permissions in the Microsoft 365 Defender portal](/microsoft-365/security/office-365-security/mdo-portal-permissions) and [Permissions in the Microsoft Purview compliance portal](/microsoft-365/compliance/microsoft-365-compliance-center-permissions). + > [!div class="mx-tableFixed"] > | Actions | Description | > | | | Users with this role **cannot** do the following: >* Application Registration and Enterprise Application owners, who can manage credentials of apps they own. Those apps may have privileged permissions in Azure AD and elsewhere not granted to Authentication Administrators. Through this path an Authentication Administrator can assume the identity of an application owner and then further assume the identity of a privileged application by updating the credentials for the application. 
>* Azure subscription owners, who may have access to sensitive or private information or critical configuration in Azure. >* Security Group and Microsoft 365 group owners, who can manage group membership. Those groups may grant access to sensitive or private information or critical configuration in Azure AD and elsewhere.->* Administrators in other services outside of Azure AD like Exchange Online, Office 365 Security & Compliance Center, and human resources systems. +>* Administrators in other services outside of Azure AD like Exchange Online, Microsoft 365 Defender portal, Microsoft Purview compliance portal, and human resources systems. >* Non-administrators like executives, legal counsel, and human resources employees who may have access to sensitive or private information. > [!div class="mx-tableFixed"] Users with this role can manage all enterprise Azure DevOps policies, applicable ## Azure Information Protection Administrator -Users with this role have all permissions in the Azure Information Protection service. This role allows configuring labels for the Azure Information Protection policy, managing protection templates, and activating protection. This role does not grant any permissions in Identity Protection Center, Privileged Identity Management, Monitor Microsoft 365 Service Health, or Office 365 Security & Compliance Center. +Users with this role have all permissions in the Azure Information Protection service. This role allows configuring labels for the Azure Information Protection policy, managing protection templates, and activating protection. This role does not grant any permissions in Identity Protection Center, Privileged Identity Management, Monitor Microsoft 365 Service Health, Microsoft 365 Defender portal, or Microsoft Purview compliance portal. 
> [!div class="mx-tableFixed"] > | Actions | Description | Users in this role can enable, disable, and delete devices in Azure AD and read ## Compliance Administrator -Users with this role have permissions to manage compliance-related features in the Microsoft Purview compliance portal, Microsoft 365 admin center, Azure, and Office 365 Security & Compliance Center. Assignees can also manage all features within the Exchange admin center and create support tickets for Azure and Microsoft 365. More information is available at [About Microsoft 365 admin roles](https://support.office.com/article/About-Office-365-admin-roles-da585eea-f576-4f55-a1e0-87090b6aaa9d). +Users with this role have permissions to manage compliance-related features in the Microsoft Purview compliance portal, Microsoft 365 admin center, Azure, and Microsoft 365 Defender portal. Assignees can also manage all features within the Exchange admin center and create support tickets for Azure and Microsoft 365. For more information, see [Roles and role groups in Microsoft Defender for Office 365 and Microsoft Purview compliance](/microsoft-365/security/office-365-security/scc-permissions). 
In | Can do -- | --[Microsoft Purview compliance portal](https://protection.office.com) | Protect and manage your organization's data across Microsoft 365 services<br>Manage compliance alerts -[Compliance Manager](/office365/securitycompliance/meet-data-protection-and-regulatory-reqs-using-microsoft-cloud) | Track, assign, and verify your organization's regulatory compliance activities -[Office 365 Security & Compliance Center](https://support.office.com/article/About-Office-365-admin-roles-da585eea-f576-4f55-a1e0-87090b6aaa9d) | Manage data governance<br>Perform legal and data investigation<br>Manage Data Subject Request<br><br>This role has the same permissions as the [Compliance Administrator RoleGroup](/microsoft-365/security/office-365-security/permissions-in-the-security-and-compliance-center#permissions-needed-to-use-features-in-the-security--compliance-center) in Office 365 Security & Compliance Center role-based access control. +[Microsoft Purview compliance portal](/microsoft-365/compliance/microsoft-365-compliance-center) | Protect and manage your organization's data across Microsoft 365 services<br>Manage compliance alerts +[Microsoft Purview Compliance Manager](/microsoft-365/compliance/compliance-manager) | Track, assign, and verify your organization's regulatory compliance activities +[Microsoft 365 Defender portal](/microsoft-365/security/defender/microsoft-365-defender-portal) | Manage data governance<br>Perform legal and data investigation<br>Manage Data Subject Request<br><br>This role has the same permissions as the [Compliance Administrator role group](/microsoft-365/security/office-365-security/scc-permissions) in Microsoft 365 Defender portal role-based access control. 
[Intune](/intune/role-based-access-control) | View all Intune audit data [Microsoft Defender for Cloud Apps](/defender-cloud-apps/manage-admins) | Has read-only permissions and can manage alerts<br>Can create and modify file policies and allow file governance actions<br>Can view all the built-in reports under Data Management In | Can do ## Compliance Data Administrator -Users with this role have permissions to track data in the Microsoft Purview compliance portal, Microsoft 365 admin center, and Azure. Users can also track compliance data within the Exchange admin center, Compliance Manager, and Teams & Skype for Business admin center and create support tickets for Azure and Microsoft 365. [This documentation](/microsoft-365/security/office-365-security/permissions-in-the-security-and-compliance-center#permissions-needed-to-use-features-in-the-security--compliance-center) has details on differences between Compliance Administrator and Compliance Data Administrator. +Users with this role have permissions to track data in the Microsoft Purview compliance portal, Microsoft 365 admin center, and Azure. Users can also track compliance data within the Exchange admin center, Compliance Manager, and Teams & Skype for Business admin center and create support tickets for Azure and Microsoft 365. For more information about the differences between Compliance Administrator and Compliance Data Administrator, see [Roles and role groups in Microsoft Defender for Office 365 and Microsoft Purview compliance](/microsoft-365/security/office-365-security/scc-permissions). 
In | Can do -- | --[Microsoft Purview compliance portal](https://protection.office.com) | Monitor compliance-related policies across Microsoft 365 services<br>Manage compliance alerts -[Compliance Manager](/office365/securitycompliance/meet-data-protection-and-regulatory-reqs-using-microsoft-cloud) | Track, assign, and verify your organization's regulatory compliance activities -[Office 365 Security & Compliance Center](https://support.office.com/article/About-Office-365-admin-roles-da585eea-f576-4f55-a1e0-87090b6aaa9d) | Manage data governance<br>Perform legal and data investigation<br>Manage Data Subject Request<br><br>This role has the same permissions as the [Compliance Data Administrator RoleGroup](/microsoft-365/security/office-365-security/permissions-in-the-security-and-compliance-center#permissions-needed-to-use-features-in-the-security--compliance-center) in Office 365 Security & Compliance Center role-based access control. +[Microsoft Purview compliance portal](/microsoft-365/compliance/microsoft-365-compliance-center) | Monitor compliance-related policies across Microsoft 365 services<br>Manage compliance alerts +[Microsoft Purview Compliance Manager](/microsoft-365/compliance/compliance-manager) | Track, assign, and verify your organization's regulatory compliance activities +[Microsoft 365 Defender portal](/microsoft-365/security/defender/microsoft-365-defender-portal) | Manage data governance<br>Perform legal and data investigation<br>Manage Data Subject Request<br><br>This role has the same permissions as the [Compliance Data Administrator role group](/microsoft-365/security/office-365-security/scc-permissions) in Microsoft 365 Defender portal role-based access control. 
[Intune](/intune/role-based-access-control) | View all Intune audit data [Microsoft Defender for Cloud Apps](/defender-cloud-apps/manage-admins) | Has read-only permissions and can manage alerts<br>Can create and modify file policies and allow file governance actions<br>Can view all the built-in reports under Data Management Users with this role have access to all administrative features in Azure Active ## Global Reader -Users in this role can read settings and administrative information across Microsoft 365 services but can't take management actions. Global Reader is the read-only counterpart to Global Administrator. Assign Global Reader instead of Global Administrator for planning, audits, or investigations. Use Global Reader in combination with other limited admin roles like Exchange Administrator to make it easier to get work done without assigning the Global Administrator role. Global Reader works with Microsoft 365 admin center, Exchange admin center, SharePoint admin center, Teams admin center, Security center, Compliance center, Azure AD admin center, and Device Management admin center. +Users in this role can read settings and administrative information across Microsoft 365 services but can't take management actions. Global Reader is the read-only counterpart to Global Administrator. Assign Global Reader instead of Global Administrator for planning, audits, or investigations. Use Global Reader in combination with other limited admin roles like Exchange Administrator to make it easier to get work done without assigning the Global Administrator role. Global Reader works with Microsoft 365 admin center, Exchange admin center, SharePoint admin center, Teams admin center, Microsoft 365 Defender portal, Microsoft Purview compliance portal, Azure AD admin center, and Device Management admin center. 
Users with this role **cannot** do the following: Users with this role **cannot** do the following: > >- OneDrive admin center - OneDrive admin center does not support the Global Reader role >- [Microsoft 365 admin center](/microsoft-365/admin/admin-overview/admin-center-overview) - Global Reader can't read integrated apps. You won't find the **Integrated apps** tab under **Settings** in the left pane of Microsoft 365 admin center.->- [Office Security & Compliance Center](https://sip.protection.office.com/homepage) - Global Reader can't read SCC audit logs, do content search, or see Secure Score. +>- [Microsoft 365 Defender portal](/microsoft-365/security/defender/microsoft-365-defender-portal) - Global Reader can't read SCC audit logs, do content search, or see Secure Score. >- [Teams admin center](/microsoftteams/manage-teams-in-modern-portal) - Global Reader cannot read **Teams lifecycle**, **Analytics & reports**, **IP phone device management**, and **App catalog**. For more information, see [Use Microsoft Teams administrator roles to manage Teams](/microsoftteams/using-admin-roles). >- [Privileged Access Management](/microsoft-365/compliance/privileged-access-management) doesn't support the Global Reader role. >- [Azure Information Protection](/azure/information-protection/what-is-information-protection) - Global Reader is supported [for central reporting](/azure/information-protection/reports-aip) only, and when your Azure AD organization isn't on the [unified labeling platform](/azure/information-protection/faqs#how-can-i-determine-if-my-tenant-is-on-the-unified-labeling-platform). Users with this role **cannot** do the following: >- Application Registration and Enterprise Application owners, who can manage credentials of apps they own. Those apps may have privileged permissions in Azure AD and elsewhere not granted to Helpdesk Administrators. 
Through this path a Helpdesk Administrator may be able to assume the identity of an application owner and then further assume the identity of a privileged application by updating the credentials for the application. >- Azure subscription owners, who might have access to sensitive or private information or critical configuration in Azure. >- Security Group and Microsoft 365 group owners, who can manage group membership. Those groups may grant access to sensitive or private information or critical configuration in Azure AD and elsewhere.->- Administrators in other services outside of Azure AD like Exchange Online, Office Security and Compliance Center, and human resources systems. +>- Administrators in other services outside of Azure AD like Exchange Online, Microsoft 365 Defender portal, Microsoft Purview compliance portal, and human resources systems. >- Non-administrators like executives, legal counsel, and human resources employees who may have access to sensitive or private information. Delegating administrative permissions over subsets of users and applying policies to a subset of users is possible with [Administrative Units](administrative-units.md). Users with this role **cannot** do the following: >* Application Registration and Enterprise Application owners, who can manage credentials of apps they own. Those apps may have privileged permissions in Azure AD and elsewhere not granted to Authentication Administrators. Through this path an Authentication Administrator can assume the identity of an application owner and then further assume the identity of a privileged application by updating the credentials for the application. >* Azure subscription owners, who may have access to sensitive or private information or critical configuration in Azure. >* Security Group and Microsoft 365 group owners, who can manage group membership. 
Those groups may grant access to sensitive or private information or critical configuration in Azure AD and elsewhere.->* Administrators in other services outside of Azure AD like Exchange Online, Office Security and Compliance Center, and human resources systems. +>* Administrators in other services outside of Azure AD like Exchange Online, Microsoft 365 Defender portal, Microsoft Purview compliance portal, and human resources systems. >* Non-administrators like executives, legal counsel, and human resources employees who may have access to sensitive or private information. > [!div class="mx-tableFixed"] Users in this role can create, manage, and delete content for Microsoft Search i ## Security Administrator -Users with this role have permissions to manage security-related features in the Microsoft 365 Defender portal, Azure Active Directory Identity Protection, Azure Active Directory Authentication, Azure Information Protection, and Office 365 Security & Compliance Center. More information about Office 365 permissions is available at [Permissions in the Security & Compliance Center](https://support.office.com/article/Permissions-in-the-Office-365-Security-Compliance-Center-d10608af-7934-490a-818e-e68f17d0e9c1). +Users with this role have permissions to manage security-related features in the Microsoft 365 Defender portal, Azure Active Directory Identity Protection, Azure Active Directory Authentication, Azure Information Protection, and Microsoft Purview compliance portal. For more information about Office 365 permissions, see [Roles and role groups in Microsoft Defender for Office 365 and Microsoft Purview compliance](/microsoft-365/security/office-365-security/scc-permissions). 
In | Can do | -[Microsoft 365 security center](https://protection.office.com) | Monitor security-related policies across Microsoft 365 services<br>Manage security threats and alerts<br>View reports +[Microsoft 365 Defender portal](/microsoft-365/security/defender/microsoft-365-defender-portal) | Monitor security-related policies across Microsoft 365 services<br>Manage security threats and alerts<br>View reports Identity Protection Center | All permissions of the Security Reader role<br>Additionally, the ability to perform all Identity Protection Center operations except for resetting passwords [Privileged Identity Management](../privileged-identity-management/pim-configure.md) | All permissions of the Security Reader role<br>**Cannot** manage Azure AD role assignments or settings-[Office 365 Security & Compliance Center](https://support.office.com/article/About-Office-365-admin-roles-da585eea-f576-4f55-a1e0-87090b6aaa9d) | Manage security policies<br>View, investigate, and respond to security threats<br>View reports +[Microsoft Purview compliance portal](/microsoft-365/compliance/microsoft-365-compliance-center) | Manage security policies<br>View, investigate, and respond to security threats<br>View reports Azure Advanced Threat Protection | Monitor and respond to suspicious security activity [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/prepare-deployment) | Assign roles<br>Manage machine groups<br>Configure endpoint threat detection and automated remediation<br>View, investigate, and respond to alerts<br/>View machines/device inventory [Intune](/intune/role-based-access-control) | Views user, device, enrollment, configuration, and application information<br>Cannot make changes to Intune Azure Advanced Threat Protection | Monitor and respond to suspicious security activity ## Security Operator -Users with this role can manage alerts and have global read-only access on security-related features, including all information in Microsoft 365 
security center, Azure Active Directory, Identity Protection, Privileged Identity Management and Office 365 Security & Compliance Center. More information about Office 365 permissions is available at [Permissions in the Security & Compliance Center](/office365/securitycompliance/permissions-in-the-security-and-compliance-center). +Users with this role can manage alerts and have global read-only access on security-related features, including all information in Microsoft 365 Defender portal, Azure Active Directory, Identity Protection, Privileged Identity Management, and Microsoft Purview compliance portal. For more information about Office 365 permissions, see [Roles and role groups in Microsoft Defender for Office 365 and Microsoft Purview compliance](/microsoft-365/security/office-365-security/scc-permissions). | In | Can do | | | |-| [Microsoft 365 security center](https://protection.office.com) | All permissions of the Security Reader role<br/>View, investigate, and respond to security threats alerts<br/>Manage security settings in security center | +| [Microsoft 365 Defender portal](/microsoft-365/security/defender/microsoft-365-defender-portal) | All permissions of the Security Reader role<br/>View, investigate, and respond to security threat alerts<br/>Manage security settings in Microsoft 365 Defender portal | | [Azure AD Identity Protection](../identity-protection/overview-identity-protection.md) | All permissions of the Security Reader role<br>Additionally, the ability to perform all Identity Protection Center operations except for resetting passwords and configuring alert e-mails. 
| | [Privileged Identity Management](../privileged-identity-management/pim-configure.md) | All permissions of the Security Reader role |-| [Office 365 Security & Compliance Center](https://support.office.com/article/About-Office-365-admin-roles-da585eea-f576-4f55-a1e0-87090b6aaa9d) | All permissions of the Security Reader role<br>View, investigate, and respond to security alerts | +| [Microsoft Purview compliance portal](/microsoft-365/compliance/microsoft-365-compliance-center) | All permissions of the Security Reader role<br>View, investigate, and respond to security alerts | | [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/prepare-deployment) | All permissions of the Security Reader role<br/>View, investigate, and respond to security alerts<br/>When you turn on role-based access control in Microsoft Defender for Endpoint, users with read-only permissions such as the Security Reader role lose access until they are assigned a Microsoft Defender for Endpoint role. | | [Intune](/intune/role-based-access-control) | All permissions of the Security Reader role | | [Microsoft Defender for Cloud Apps](/defender-cloud-apps/manage-admins) | All permissions of the Security Reader role<br>View, investigate, and respond to security alerts | Users with this role can manage alerts and have global read-only access on secur ## Security Reader -Users with this role have global read-only access on security-related feature, including all information in Microsoft 365 security center, Azure Active Directory, Identity Protection, Privileged Identity Management, as well as the ability to read Azure Active Directory sign-in reports and audit logs, and in Office 365 Security & Compliance Center. More information about Office 365 permissions is available at [Permissions in the Security & Compliance Center](https://support.office.com/article/Permissions-in-the-Office-365-Security-Compliance-Center-d10608af-7934-490a-818e-e68f17d0e9c1). 
+Users with this role have global read-only access on security-related features, including all information in the Microsoft 365 Defender portal, Azure Active Directory, Identity Protection, Privileged Identity Management, and the Microsoft Purview compliance portal, as well as the ability to read Azure Active Directory sign-in reports and audit logs. For more information about Office 365 permissions, see [Roles and role groups in Microsoft Defender for Office 365 and Microsoft Purview compliance](/microsoft-365/security/office-365-security/scc-permissions). In | Can do | -[Microsoft 365 security center](https://protection.office.com) | View security-related policies across Microsoft 365 services<br>View security threats and alerts<br>View reports +[Microsoft 365 Defender portal](/microsoft-365/security/defender/microsoft-365-defender-portal) | View security-related policies across Microsoft 365 services<br>View security threats and alerts<br>View reports Identity Protection Center | Read all security reports and settings information for security features<br><ul><li>Anti-spam<li>Encryption<li>Data loss prevention<li>Anti-malware<li>Advanced threat protection<li>Anti-phishing<li>Mail flow rules [Privileged Identity Management](../privileged-identity-management/pim-configure.md) | Has read-only access to all information surfaced in Azure AD Privileged Identity Management: Policies and reports for Azure AD role assignments and security reviews.<br>**Cannot** sign up for Azure AD Privileged Identity Management or make any changes to it. 
In the Privileged Identity Management portal or via PowerShell, someone in this role can activate additional roles (for example, Global Administrator or Privileged Role Administrator), if the user is eligible for them.-[Office 365 Security & Compliance Center](https://support.office.com/article/About-Office-365-admin-roles-da585eea-f576-4f55-a1e0-87090b6aaa9d) | View security policies<br>View and investigate security threats<br>View reports +[Microsoft Purview compliance portal](/microsoft-365/compliance/microsoft-365-compliance-center) | View security policies<br>View and investigate security threats<br>View reports [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/prepare-deployment) | View and investigate alerts<br/>When you turn on role-based access control in Microsoft Defender for Endpoint, users with read-only permissions such as the Security Reader role lose access until they are assigned a Microsoft Defender for Endpoint role. [Intune](/intune/role-based-access-control) | Views user, device, enrollment, configuration, and application information. Cannot make changes to Intune. [Microsoft Defender for Cloud Apps](/defender-cloud-apps/manage-admins) | Has read permissions. Users with this role **cannot** do the following: >- Application Registration and Enterprise Application owners, who can manage credentials of apps they own. Those apps may have privileged permissions in Azure AD and elsewhere not granted to User Administrators. Through this path a User Administrator may be able to assume the identity of an application owner and then further assume the identity of a privileged application by updating the credentials for the application. >- Azure subscription owners, who may have access to sensitive or private information or critical configuration in Azure. >- Security Group and Microsoft 365 group owners, who can manage group membership. 
Those groups may grant access to sensitive or private information or critical configuration in Azure AD and elsewhere.->- Administrators in other services outside of Azure AD like Exchange Online, Office Security and Compliance Center, and human resources systems. +>- Administrators in other services outside of Azure AD like Exchange Online, Microsoft 365 Defender portal, Microsoft Purview compliance portal, and human resources systems. >- Non-administrators like executives, legal counsel, and human resources employees who may have access to sensitive or private information. > [!div class="mx-tableFixed"] |
active-directory | Alertops Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/alertops-tutorial.md | Title: 'Tutorial: Azure Active Directory integration with AlertOps | Microsoft Docs' + Title: 'Tutorial: Azure AD SSO integration with AlertOps' description: Learn how to configure single sign-on between Azure Active Directory and AlertOps. -# Tutorial: Integrate AlertOps with Azure Active Directory +# Tutorial: Azure AD SSO integration with AlertOps In this tutorial, you'll learn how to integrate AlertOps with Azure Active Directory (Azure AD). When you integrate AlertOps with Azure AD, you can: Follow these steps to enable Azure AD SSO in the Azure portal. 1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following steps: 1. In the **Identifier** text box, type a URL using the following pattern:- `https://<SUBDOMAIN>.alertops.com` + `https://app.alertops.com/<SUBDOMAIN>` 1. In the **Reply URL** text box, type a URL using the following pattern:- `https://<SUBDOMAIN>.alertops.com/login.aspx` + `https://api.alertops.com/api/v2/saml/<SUBDOMAIN>` -1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode: -- In the **Sign-on URL** text box, type a URL using the following pattern: - `https://<SUBDOMAIN>.alertops.com/login.aspx` + 1. In the **Logout Url (Optional)** text box, type a URL using the following pattern: + `https://app.alertops.com/<SUBDOMAIN>` > [!NOTE]- > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [AlertOps Client support team](mailto:support@alertops.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. + > These values are not real. Update these values with the actual Identifier, Reply URL and Logout Url. 
Contact [AlertOps Client support team](mailto:support@alertops.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. 1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer. In this section, you'll enable Britta Simon to use Azure single sign-on by grant 3. If you want to set up AlertOps manually, open a new web browser window and sign in to your AlertOps company site as an administrator and perform the following steps: -4. Click on the **Account settings** from the left navigation panel. +4. Click on the **Account settings** from the user profile.  -5. On the **Subscription Settings** page select **SSO** and perform the following steps: +5. On the **Account Settings** page, click **Update SSO** and select **Use single sign-on (SSO)**. -  +  - a. Select **Use Single Sign-On(SSO)** checkbox. +1. In the **SSO** section, perform the following steps: - b. Select **Azure Active Directory** as an **SSO Provider** from the dropdown. +  - c. In the **Issuer URL** textbox, use the identifier value, which you have used in the **Basic SAML Configuration** section in the Azure portal. + a. In the **Issuer URL** textbox, use the identifier value, which you have used in the **Basic SAML Configuration** section in the Azure portal. - d. In the **SAML endpoint URL** textbox, paste the **Login URL** value, which you have copied from the Azure portal. + b. In the **SAML endpoint URL** textbox, paste the **Login URL** value, which you have copied from the Azure portal. - e. In the **SLO endpoint URL** textbox, paste the **Login URL** value, which you have copied from the Azure portal. + c. In the **SLO endpoint URL** textbox, paste the **Login URL** value, which you have copied from the Azure portal. - f. 
Select **SHA256** as a **SAML Signature Algorithm** from the dropdown. + d. Select **SHA256** as a **SAML Signature Algorithm** from the dropdown. - g. Open your downloaded Certificate(Base64) file in Notepad. Copy the content of it into your clipboard, and then paste it to the X.509 Certificate text box. + e. Open your downloaded **Certificate(Base64)** file in Notepad. Copy the content of it into your clipboard, and then paste it to the **X.509 Certificate** text box. ++ f. Enable **Allow username/password login**. ### Create AlertOps test user 1. In a different browser window, sign in to your AlertOps company site as administrator. -2. Click on the **Users** from the left navigation panel. +2. Click on the **Configuration** and then **Users** from the navigation panel.  In this section, you'll enable Britta Simon to use Azure single sign-on by grant  - a. In the **Login User Name** textbox, enter the user name of the user like **Brittasimon**. -- b. In the **Official Email** textbox, enter the email address of the user like **Brittasimon\@contoso.com**. + a. In the **User Name** textbox, enter the user name of the user, like **Brittasimon**. - c. In the **First Name** textbox, enter the first name of user like **Britta**. + b. In the **First Name** textbox, enter the first name of the user, like **Britta**. - d. In the **Last Name** textbox, enter the first name of user like **Simon**. + c. In the **Last Name** textbox, enter the last name of the user, like **Simon**. - e. Select the **Type** value from the dropdown as per your organization. + d. In the **Email** textbox, enter the email address of the user, like `Brittasimon@contoso.com`. - f. Select the **Role** of the user from the dropdown as per your organization. + e. Select the **User Role** of the user from the dropdown as per your organization. + f. Select **Submit**. ## Test SSO |
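The AlertOps steps above have you paste the content of the downloaded **Certificate (Base64)** file into the **X.509 Certificate** box. Some SSO forms accept the full PEM including the `-----BEGIN/END-----` lines, while others want only the Base64 body; a minimal helper sketch for extracting just the body (the filename is a hypothetical example, and the need to strip the headers is an assumption about the target form, not documented AlertOps behavior):

```python
from pathlib import Path


def pem_body(path: str) -> str:
    """Return the Base64 payload of a PEM certificate file,
    with the -----BEGIN/END----- lines and blank lines removed."""
    lines = Path(path).read_text().splitlines()
    return "".join(line.strip() for line in lines
                   if line.strip() and "-----" not in line)


# Hypothetical usage with the downloaded certificate:
# print(pem_body("AlertOps.cer"))
```

If the form accepts the full PEM as-is, no preprocessing is needed; this is only a fallback when the paste is rejected.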
active-directory | Oracle Access Manager For Oracle Ebs Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/oracle-access-manager-for-oracle-ebs-tutorial.md | + + Title: Azure Active Directory SSO integration with Oracle Access Manager for Oracle E-Business Suite +description: Learn how to configure single sign-on between Azure Active Directory and Oracle Access Manager for Oracle E-Business Suite. ++++++++ Last updated : 02/07/2023+++++# Azure Active Directory SSO integration with Oracle Access Manager for Oracle E-Business Suite ++In this article, you'll learn how to integrate Oracle Access Manager for Oracle E-Business Suite with Azure Active Directory (Azure AD). When you integrate Oracle Access Manager for Oracle E-Business Suite with Azure AD, you can: ++* Control in Azure AD who has access to Oracle Access Manager for Oracle E-Business Suite. +* Enable your users to be automatically signed-in to Oracle Access Manager for Oracle E-Business Suite with their Azure AD accounts. +* Manage your accounts in one central location - the Azure portal. ++You'll configure and test Azure AD single sign-on for Oracle Access Manager for Oracle E-Business Suite in a test environment. Oracle Access Manager for Oracle E-Business Suite supports only **SP** initiated single sign-on. ++> [!NOTE] +> Identifier of this application is a fixed string value so only one instance can be configured in one tenant. ++## Prerequisites ++To integrate Azure Active Directory with Oracle Access Manager for Oracle E-Business Suite, you need: ++* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal. +* An Azure AD subscription. 
If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* Oracle Access Manager for Oracle E-Business Suite single sign-on (SSO) enabled subscription. ++## Add application and assign a test user ++Before you begin the process of configuring single sign-on, you need to add the Oracle Access Manager for Oracle E-Business Suite application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration. ++### Add Oracle Access Manager for Oracle E-Business Suite from the Azure AD gallery ++Add Oracle Access Manager for Oracle E-Business Suite from the Azure AD application gallery to configure single sign-on with Oracle Access Manager for Oracle E-Business Suite. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md). ++### Create and assign Azure AD test user ++Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon. ++Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards](/microsoft-365/admin/misc/azure-ad-setup-guides). ++## Configure Azure AD SSO ++Complete the following steps to enable Azure AD single sign-on in the Azure portal. ++1. In the Azure portal, on the **Oracle Access Manager for Oracle E-Business Suite** application integration page, find the **Manage** section and select **single sign-on**. +1. 
On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings. ++  ++1. On the **Basic SAML Configuration** section, perform the following steps: ++ a. In the **Identifier** textbox, type a URL using the following pattern: `https://<SUBDOMAIN>.oraclecloud.com/` ++ b. In the **Reply URL** textbox, type a URL using the following pattern: `https://<SUBDOMAIN>.oraclecloud.com/v1/saml/<UNIQUEID>` ++ c. In the **Sign on URL** textbox, type a URL using the following pattern: + `https://<SUBDOMAIN>.oraclecloud.com/` ++1. Your Oracle Access Manager for Oracle E-Business Suite application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows an example for this. The default value of **Unique User Identifier** is **user.userprincipalname**, but Oracle Access Manager for Oracle E-Business Suite expects this to be mapped with the user's email address. For that, you can use the **user.mail** attribute from the list or use the appropriate attribute value based on your organization configuration. ++  ++1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer. ++  ++## Configure Oracle Access Manager for Oracle E-Business Suite SSO ++1. Sign in to the Oracle Access Manager console as an Administrator. +1. Click the **Federation** tab at the top of the console. +1. In the **Federation** area of the **Launch Pad** tab, click **Service Provider Management**. +1. On the Service Provider Administration tab, click **Create Identity Provider Partner**. +1. In the **General** area, enter a name for the **Identity Provider partner** and select both **Enable Partner and Default Identity Provider Partner**. Go to the next step before saving. +1. 
In the **Service Information** area: ++ a. Select **SAML2.0** as the protocol. ++ b. Select **Load from provider metadata**. ++ c. Click **Browse** (for Windows) or **Choose File** (for Mac) and select the **Federation Metadata XML** file that you downloaded from Azure portal. ++ d. Go to the next step before saving. ++1. In the **Mapping Options** area: ++ a. Select the **User Identity Store** option that will be used as the Oracle Access Manager LDAP identity store that is checked for E-Business Suite users. Typically, this is already configured as the Oracle Access Manager identity store. ++ b. Leave **User Search Base DN** blank. The search base is automatically picked from the identity store configuration. ++ c. Select **Map assertion Name ID to User ID Store attribute** and enter `mail` in the text box. ++1. Click **Save** to save the identity provider partner. +1. After the partner is saved, return to the **Advanced** area at the bottom of the tab. Ensure that the options are configured as follows: ++ a. **Enable global logout** is selected. ++ b. **HTTP POST SSO** Response Binding is selected. ++### Create Oracle Access Manager for Oracle E-Business Suite test user ++In this section, you create a user called Britta Simon at Oracle Access Manager for Oracle E-Business Suite. Work with the [Oracle Access Manager for Oracle E-Business Suite support team](https://www.oracle.com/support/advanced-customer-support/products/cloud.html) to add the users in the Oracle Access Manager for Oracle E-Business Suite platform. Users must be created and activated before you use single sign-on. ++## Test SSO ++In this section, you test your Azure AD single sign-on configuration with the following options. ++* Click on **Test this application** in Azure portal. This will redirect to Oracle Access Manager for Oracle E-Business Suite Sign-on URL where you can initiate the login flow. 
++* Go to Oracle Access Manager for Oracle E-Business Suite Sign-on URL directly and initiate the login flow from there. ++* You can use Microsoft My Apps. When you select the Oracle Access Manager for Oracle E-Business Suite tile in My Apps, this will redirect to Oracle Access Manager for Oracle E-Business Suite Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ++## Additional resources ++* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) +* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md). ++## Next steps ++Once you configure Oracle Access Manager for Oracle E-Business Suite, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad). |
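The **Load from provider metadata** step in the OAM configuration above imports the identity provider settings from the Federation Metadata XML you downloaded from the Azure portal. Before uploading it, you can sanity-check that the file parses and carries an entity ID; a minimal sketch (the filename is an assumption, and the `EntityDescriptor`/`entityID` shape comes from the SAML 2.0 metadata schema, not from this article):

```python
import xml.etree.ElementTree as ET


def metadata_entity_id(path: str) -> str:
    """Parse a SAML federation metadata file and return the root
    EntityDescriptor's entityID, or an empty string if it's missing."""
    root = ET.parse(path).getroot()
    return root.attrib.get("entityID", "")


# Hypothetical usage with the downloaded metadata file:
# print(metadata_entity_id("FederationMetadata.xml"))
```

A parse error or empty result here usually means a truncated or re-encoded download; fetch the metadata again from the **SAML Signing Certificate** section before retrying the OAM import.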
active-directory | Oracle Access Manager For Oracle Retail Merchandising Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/oracle-access-manager-for-oracle-retail-merchandising-tutorial.md | + + Title: Azure Active Directory SSO integration with Oracle Access Manager for Oracle Retail Merchandising +description: Learn how to configure single sign-on between Azure Active Directory and Oracle Access Manager for Oracle Retail Merchandising. ++++++++ Last updated : 02/07/2023+++++# Azure Active Directory SSO integration with Oracle Access Manager for Oracle Retail Merchandising ++In this article, you'll learn how to integrate Oracle Access Manager for Oracle Retail Merchandising with Azure Active Directory (Azure AD). When you integrate Oracle Access Manager for Oracle Retail Merchandising with Azure AD, you can: ++* Control in Azure AD who has access to Oracle Access Manager for Oracle Retail Merchandising. +* Enable your users to be automatically signed-in to Oracle Access Manager for Oracle Retail Merchandising with their Azure AD accounts. +* Manage your accounts in one central location - the Azure portal. ++You'll configure and test Azure AD single sign-on for Oracle Access Manager for Oracle Retail Merchandising in a test environment. Oracle Access Manager for Oracle Retail Merchandising supports only **SP** initiated single sign-on. ++> [!NOTE] +> Identifier of this application is a fixed string value so only one instance can be configured in one tenant. ++## Prerequisites ++To integrate Azure Active Directory with Oracle Access Manager for Oracle Retail Merchandising, you need: ++* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal. +* An Azure AD subscription. 
If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* Oracle Access Manager for Oracle Retail Merchandising single sign-on (SSO) enabled subscription. ++## Add application and assign a test user ++Before you begin the process of configuring single sign-on, you need to add the Oracle Access Manager for Oracle Retail Merchandising application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration. ++### Add Oracle Access Manager for Oracle Retail Merchandising from the Azure AD gallery ++Add Oracle Access Manager for Oracle Retail Merchandising from the Azure AD application gallery to configure single sign-on with Oracle Access Manager for Oracle Retail Merchandising. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md). ++### Create and assign Azure AD test user ++Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon. ++Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards](/microsoft-365/admin/misc/azure-ad-setup-guides). ++## Configure Azure AD SSO ++Complete the following steps to enable Azure AD single sign-on in the Azure portal. ++1. In the Azure portal, on the **Oracle Access Manager for Oracle Retail Merchandising** application integration page, find the **Manage** section and select **single sign-on**. +1. 
On the **Select a single sign-on method** page, select **SAML**. +1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings. ++  ++1. On the **Basic SAML Configuration** section, perform the following steps: ++ a. In the **Identifier** textbox, type a URL using the following pattern: `https://<SUBDOMAIN>.oraclecloud.com/` ++ b. In the **Reply URL** textbox, type a URL using the following pattern: `https://<SUBDOMAIN>.oraclecloud.com/v1/saml/<UNIQUEID>` ++ c. In the **Sign on URL** textbox, type a URL using the following pattern: + `https://<SUBDOMAIN>.oraclecloud.com/` + + >[!NOTE] + > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact the [Oracle Access Manager for Oracle Retail Merchandising support team](https://www.oracle.com/support/advanced-customer-support/products/cloud.html) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. ++1. Your Oracle Access Manager for Oracle Retail Merchandising application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows an example of this. The default value of **Unique User Identifier** is **user.userprincipalname**, but Oracle Access Manager for Oracle Retail Merchandising expects this to be mapped with the user's email address. For that, you can use the **user.mail** attribute from the list or use the appropriate attribute value based on your organization configuration. ++  ++1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer. 
++  ++## Configure Oracle Access Manager for Oracle Retail Merchandising SSO ++To configure single sign-on on the Oracle Access Manager for Oracle Retail Merchandising side, you need to send the downloaded Federation Metadata XML file from the Azure portal to the [Oracle Access Manager for Oracle Retail Merchandising support team](https://www.oracle.com/support/advanced-customer-support/products/cloud.html). The support team uses this file to configure the SAML SSO connection properly on both sides. ++### Create Oracle Access Manager for Oracle Retail Merchandising test user ++In this section, you create a user called Britta Simon in Oracle Access Manager for Oracle Retail Merchandising. Work with the [Oracle Access Manager for Oracle Retail Merchandising support team](https://www.oracle.com/support/advanced-customer-support/products/cloud.html) to add the users in the Oracle Access Manager for Oracle Retail Merchandising platform. Users must be created and activated before you use single sign-on. ++## Test SSO ++In this section, you test your Azure AD single sign-on configuration with the following options. ++* Click on **Test this application** in the Azure portal. This will redirect to the Oracle Access Manager for Oracle Retail Merchandising Sign-on URL where you can initiate the login flow. ++* Go to the Oracle Access Manager for Oracle Retail Merchandising Sign-on URL directly and initiate the login flow from there. ++* You can use Microsoft My Apps. When you select the Oracle Access Manager for Oracle Retail Merchandising tile in My Apps, this will redirect to the Oracle Access Manager for Oracle Retail Merchandising Sign-on URL. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ++## Additional resources ++* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) +* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md) 
++## Next steps ++Once you configure Oracle Access Manager for Oracle Retail Merchandising, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad). |
active-directory | Oracle Idcs For Peoplesoft Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/oracle-idcs-for-peoplesoft-tutorial.md | + + Title: Azure Active Directory SSO integration with Oracle IDCS for PeopleSoft +description: Learn how to configure single sign-on between Azure Active Directory and Oracle IDCS for PeopleSoft. ++++++++ Last updated : 02/07/2023+++++# Azure Active Directory SSO integration with Oracle IDCS for PeopleSoft ++In this article, you'll learn how to integrate Oracle IDCS for PeopleSoft with Azure Active Directory (Azure AD). When you integrate Oracle IDCS for PeopleSoft with Azure AD, you can: ++* Control in Azure AD who has access to Oracle IDCS for PeopleSoft. +* Enable your users to be automatically signed-in to Oracle IDCS for PeopleSoft with their Azure AD accounts. +* Manage your accounts in one central location - the Azure portal. ++You'll configure and test Azure AD single sign-on for Oracle IDCS for PeopleSoft in a test environment. Oracle IDCS for PeopleSoft supports only **SP** initiated single sign-on. ++> [!NOTE] +> Identifier of this application is a fixed string value so only one instance can be configured in one tenant. ++## Prerequisites ++To integrate Azure Active Directory with Oracle IDCS for PeopleSoft, you need: ++* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal. +* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* Oracle IDCS for PeopleSoft single sign-on (SSO) enabled subscription. 
++## Add application and assign a test user ++Before you begin the process of configuring single sign-on, you need to add the Oracle IDCS for PeopleSoft application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration. ++### Add Oracle IDCS for PeopleSoft from the Azure AD gallery ++Add Oracle IDCS for PeopleSoft from the Azure AD application gallery to configure single sign-on with Oracle IDCS for PeopleSoft. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md). ++### Create and assign Azure AD test user ++Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon. ++Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides) ++## Configure Azure AD SSO ++Complete the following steps to enable Azure AD single sign-on in the Azure portal. ++1. In the Azure portal, on the **Oracle IDCS for PeopleSoft** application integration page, find the **Manage** section and select **single sign-on**. +1. On the **Select a single sign-on method** page, select **SAML**. +1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings. ++  ++1. On the **Basic SAML Configuration** section, perform the following steps: ++ a. 
In the **Identifier** textbox, type a URL using the following pattern: `https://<SUBDOMAIN>.oraclecloud.com/` ++ b. In the **Reply URL** textbox, type a URL using the following pattern: `https://<SUBDOMAIN>.oraclecloud.com/v1/saml/<UNIQUEID>` ++ c. In the **Sign on URL** textbox, type a URL using the following pattern: + `https://<SUBDOMAIN>.oraclecloud.com/` ++ >[!NOTE] + > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact the [Oracle IDCS for PeopleSoft support team](https://www.oracle.com/support/advanced-customer-support/products/cloud.html) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. ++1. Your Oracle IDCS for PeopleSoft application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows an example of this. The default value of **Unique User Identifier** is **user.userprincipalname**, but Oracle IDCS for PeopleSoft expects this to be mapped with the user's email address. For that, you can use the **user.mail** attribute from the list or use the appropriate attribute value based on your organization configuration. ++  ++1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer. ++  ++## Configure Oracle IDCS for PeopleSoft SSO ++To configure single sign-on on the Oracle IDCS for PeopleSoft side, you need to send the downloaded Federation Metadata XML file from the Azure portal to the [Oracle IDCS for PeopleSoft support team](https://www.oracle.com/support/advanced-customer-support/products/cloud.html). The support team uses this file to configure the SAML SSO connection properly on both sides. 
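The Identifier and Reply URL values in the steps above follow fixed `oraclecloud.com` URL patterns. A quick shape check can catch typos before the values are exchanged with the support team; the regexes and helper below are an illustrative sketch of those assumed pattern forms, not part of Azure AD or Oracle tooling.

```python
import re

# Documented placeholder patterns from the Basic SAML Configuration section.
# <SUBDOMAIN> and <UNIQUEID> are tenant-specific values; the character classes
# here are assumptions for illustration.
IDENTIFIER_RE = re.compile(r"^https://[a-z0-9-]+\.oraclecloud\.com/$")
REPLY_URL_RE = re.compile(r"^https://[a-z0-9-]+\.oraclecloud\.com/v1/saml/[A-Za-z0-9-]+$")

def check_saml_urls(identifier: str, reply_url: str) -> bool:
    """Return True when both values match the documented URL shapes."""
    return bool(IDENTIFIER_RE.match(identifier)) and bool(REPLY_URL_RE.match(reply_url))

# Example placeholder values (not real tenant values):
print(check_saml_urls("https://contoso.oraclecloud.com/",
                      "https://contoso.oraclecloud.com/v1/saml/abc123"))  # True
```

A missing trailing slash on the Identifier, a common copy-paste slip, makes the check fail.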
++### Create Oracle IDCS for PeopleSoft test user ++In this section, you create a user called Britta Simon in Oracle IDCS for PeopleSoft. Work with the [Oracle IDCS for PeopleSoft support team](https://www.oracle.com/support/advanced-customer-support/products/cloud.html) to add the users in the Oracle IDCS for PeopleSoft platform. Users must be created and activated before you use single sign-on. ++## Test SSO ++In this section, you test your Azure AD single sign-on configuration with the following options. ++* Click on **Test this application** in the Azure portal. This will redirect to the Oracle IDCS for PeopleSoft Sign-on URL where you can initiate the login flow. ++* Go to the Oracle IDCS for PeopleSoft Sign-on URL directly and initiate the login flow from there. ++* You can use Microsoft My Apps. When you select the Oracle IDCS for PeopleSoft tile in My Apps, this will redirect to the Oracle IDCS for PeopleSoft Sign-on URL. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ++## Additional resources ++* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) +* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md) ++## Next steps ++Once you configure Oracle IDCS for PeopleSoft, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad). |
active-directory | Phenom Txm Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/phenom-txm-tutorial.md | -In this tutorial, you'll learn how to integrate Phenom TXM with Azure Active Directory (Azure AD). When you integrate Phenom TXM with Azure AD, you can: +In this tutorial, you will learn how to integrate Phenom TXM with Azure Active Directory (Azure AD). When you integrate Phenom TXM with Azure AD, you can: * Control in Azure AD who has access to Phenom TXM. * Enable your users to be automatically signed-in to Phenom TXM with their Azure AD accounts. In this tutorial, you'll learn how to integrate Phenom TXM with Azure Active Dir To get started, you need the following items: * An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).-* Phenom TXM single sign-on (SSO) enabled subscription. +* Phenom TXM single sign-on (SSO) enabled subscription and a user account with the Client Admin role in Service Hub. * Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD. For more information, see [Azure built-in roles](../roles/permissions-reference.md). To configure the integration of Phenom TXM into Azure AD, you need to add Phenom 1. In the **Add from the gallery** section, type **Phenom TXM** in the search box. 1. Select **Phenom TXM** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. - Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. 
[Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides) +Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides) ## Configure and test Azure AD SSO for Phenom TXM -Configure and test Azure AD SSO with Phenom TXM using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Phenom TXM. +Configure and test Azure AD SSO with Phenom TXM using a test user called **B.Simon**. For SSO to work, you need to establish an assignment relationship between an Azure AD user or group and the related Phenom TXM application, ensuring that Azure AD passes the user's email address to Phenom TXM as a user identifier. To configure and test Azure AD SSO with Phenom TXM, perform the following steps: Follow these steps to enable Azure AD SSO in the Azure portal. 1. On the **Basic SAML Configuration** section, perform the following steps: - a. In the **Identifier** text box, type a URL using one of the following patterns: + a. In the **Identifier** text box, enter the **ENTITY ID** copied from Service Hub. - | **Identifier** | - |--| - | `https://<SUBDOMAIN>.phenompro.com/auth/realms/<ID>` | - | `https://<SUBDOMAIN>.phenom.com/auth/realms/<ID>` | + b. In the **Reply URL** text box, enter the **Redirect URI (ACS URL)** copied from Service Hub. - b. In the **Reply URL** text box, type a URL using one of the following patterns: + 1. In the first **Reply URL** text box, enter the **Redirect URI (ACS URL)** copied from Service Hub and set the Index value to **0**. 
- | Reply URL | - |--| - | `https://<SUBDOMAIN>.phenompro.com/auth/<ID>` | - | `https://<SUBDOMAIN>.phenom.com/auth/<ID>` | + 1. In the second **Reply URL** text box, enter the **Redirect URI (ACS URL) SP Initiated Flow** copied from Service Hub and set the Index value to **1**. -1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode: - - In the **Sign-on URL** text box, type a URL using one of the following patterns: + > [!NOTE] + > Ensure that the first **Reply URL** is set as the **Default** using the checkbox. - | Sign-on URL | - |--| - | `https://<SUBDOMAIN>.phenompro.com` | - | `https://<SUBDOMAIN>.phenom.com` | +1. Perform the following step if you wish to configure the application in **SP** initiated mode: + + In the **Sign on URL** text box, type one of the following URLs: - > [!NOTE] - > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [Phenom TXM Client support team](mailto:support@phenompeople.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. + | Environment | Sign on URL | + |--|-| + | Staging | `https://login-stg.phenompro.com` | + | Production | `https://login.phenom.com` | 1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy **App Federation Metadata Url** and save it on your computer. Follow these steps to enable Azure AD SSO in the Azure portal. ### Create an Azure AD test user -In this section, you'll create a test user in the Azure portal called B.Simon. +In this section, you will create a test user in the Azure portal called B.Simon. 1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**. 1. Select **New user** at the top of the screen. 
In this section, you'll create a test user in the Azure portal called B.Simon. ### Assign the Azure AD test user -In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Phenom TXM. +In this section, you will enable B.Simon to use Azure single sign-on by granting access to Phenom TXM. 1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **Phenom TXM**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**. 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog. 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.-1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected. +1. If you're expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected. 1. In the **Add Assignment** dialog, click the **Assign** button. ## Configure Phenom TXM SSO -1. Log in to your Phenom TXM company site as an administrator. +1. Log in to your Phenom TXM instance Service Hub as a user with the Client Admin role. 1. Go to **Settings** tab > **Identity Provider**. In this section, you'll enable B.Simon to use Azure single sign-on by granting a  - a. Enter a valid name in the **Display Name** textbox. + a. Choose **SAML** from the dropdown selector. - b. In the **Single SignOn URL** textbox, paste the **Login URL** value which you have copied from the Azure portal. + b. Enter a valid name in the **Display Name** textbox. - c. In the **Meta data URL** textbox, paste the **App Federation Metadata Url** value which you have copied from the Azure portal. + c. 
In the **Single SignOn URL** textbox, paste the **Login URL** value, which you've copied from the Azure portal. - d. Click **Save Changes**. + d. In the **Meta data URL** textbox, paste the **App Federation Metadata Url** value, which you've copied from the Azure portal. e. Copy **Entity ID** value, paste this value into the **Identifier** text box in the **Basic SAML Configuration** section in the Azure portal. - f. Copy **Redirect URI (ACS URL)** value, paste this value into the **Reply URL** text box in the **Basic SAML Configuration** section in the Azure portal. + f. Copy **Redirect URI (ACS URL)** value, paste this value into the first **Reply URL** text box in the **Basic SAML Configuration** section in the Azure portal. ++ g. Copy **Redirect URI (ACS URL) SP Initiated Flow** value, paste this value into the second **Reply URL** text box in the **Basic SAML Configuration** section in the Azure portal. ### Create Phenom TXM test user -1. In a different web browser window, log into your Phenom TXM website as an administrator. +1. In a different web browser window, log in to your Phenom TXM website as an administrator. 1. Go to **Users** tab and click **Create Users** > **Create single new User**. In this section, you test your Azure AD single sign-on configuration with follow #### SP initiated: -* Click on **Test this application** in Azure portal. This will redirect to Phenom TXM Sign on URL where you can initiate the login flow. +* Click on **Test this application** in Azure portal. This will redirect to Phenom TXM Sign-on URL where you can initiate the login flow. * Go to Phenom TXM Sign-on URL directly and initiate the login flow from there. In this section, you test your Azure AD single sign-on configuration with follow * Click on **Test this application** in Azure portal and you should be automatically signed in to the Phenom TXM for which you set up the SSO. -You can also use Microsoft My Apps to test the application in any mode. 
When you click the Phenom TXM tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Phenom TXM for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). +You can also use Microsoft My Apps to test the application in any mode. When you click the Phenom TXM tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Phenom TXM for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ## Next steps |
advisor | Advisor Reference Operational Excellence Recommendations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-operational-excellence-recommendations.md | This cluster is not using ephemeral OS disks, which can provide lower read/write Learn more about [Kubernetes service - UseEphemeralOSdisk (Use Ephemeral OS disk)](../aks/cluster-configuration.md#ephemeral-os). -### Free and Standard pricing tiers for AKS control plane management +### Free and Standard tiers for AKS control plane management -This cluster has not enabled the Standard pricing tier with the Uptime SLA feature, and is limited to an SLO of 99.5%. +This cluster has not enabled the Standard tier, which includes the Uptime SLA by default, and is limited to an SLO of 99.5%. -Learn more about [Kubernetes service - UseUptimeSLA (Use Uptime SLA)](../aks/free-standard-pricing-tiers.md). +Learn more about [Kubernetes service - Free and Standard Tier](../aks/free-standard-pricing-tiers.md). ### Deprecated Kubernetes API in 1.22 has been found |
aks | Api Server Vnet Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/api-server-vnet-integration.md | az group create -l <location> -n <resource-group> ```azurecli-interactive # Create the virtual network az network vnet create -n <vnet-name> \+ -g <resource-group> \ -l <location> \ --address-prefixes 172.19.0.0/16 # Create an API server subnet-az network vnet subnet create --vnet-name <vnet-name> \ +az network vnet subnet create -g <resource-group> \ + --vnet-name <vnet-name> \ --name <apiserver-subnet-name> \ --delegations Microsoft.ContainerService/managedClusters \ --address-prefixes 172.19.0.0/28 # Create a cluster subnet-az network vnet subnet create --vnet-name <vnet-name> \ +az network vnet subnet create -g <resource-group> \ + --vnet-name <vnet-name> \ --name <cluster-subnet-name> \ --address-prefixes 172.19.1.0/24 ``` az network vnet subnet create --vnet-name <vnet-name> \ ```azurecli-interactive # Create the identity-az identity create -n <managed-identity-name> -l <location> +az identity create -g <resource-group> -n <managed-identity-name> -l <location> # Assign Network Contributor to the API server subnet az role assignment create --scope <apiserver-subnet-resource-id> \ |
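The address prefixes in the commands above have to nest correctly: both delegated subnets must fall inside the VNet's 172.19.0.0/16 range and must not overlap each other. A quick sanity check of that CIDR plan with Python's standard `ipaddress` module (a sketch; useful before rerunning the CLI commands with changed ranges):

```python
import ipaddress

# Address ranges from the az network vnet / subnet commands above.
vnet = ipaddress.ip_network("172.19.0.0/16")
apiserver_subnet = ipaddress.ip_network("172.19.0.0/28")
cluster_subnet = ipaddress.ip_network("172.19.1.0/24")

# Both subnets must sit inside the VNet address space...
assert apiserver_subnet.subnet_of(vnet)
assert cluster_subnet.subnet_of(vnet)
# ...and must not overlap each other.
assert not apiserver_subnet.overlaps(cluster_subnet)

# A /28 holds 16 addresses; Azure reserves 5 per subnet, leaving 11 usable.
print(apiserver_subnet.num_addresses)  # 16
```

The same check generalizes to any number of subnets by iterating over pairs.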
aks | Azure Cni Overlay | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-overlay.md | Like Azure CNI Overlay, Kubenet assigns IP addresses to pods from an address spa | Cluster scale | 1000 nodes and 250 pods/node | 400 nodes and 250 pods/node | | Network configuration | Simple - no additional configuration required for pod networking | Complex - requires route tables and UDRs on cluster subnet for pod networking | | Pod connectivity performance | Performance on par with VMs in a VNet | Additional hop adds minor latency |-| Kubernetes Network Policies | Azure Network Policies, Calico | Calico | +| Kubernetes Network Policies | Azure Network Policies, Calico, Cilium | Calico | | OS platforms supported | Linux and Windows | Linux only | ## IP address planning |
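The scale limits in the comparison above drive pod CIDR sizing for the IP address planning that follows. As a rough sketch, assuming the overlay hands each node a /24 slice of the pod address space (an assumption to confirm for your configuration), the smallest pod CIDR for the 1000-node limit works out like this:

```python
import math

# Scale limit quoted above for Azure CNI Overlay: up to 1000 nodes.
nodes = 1000
ips_per_node_block = 256  # assumed /24 slice of pod CIDR per node

total = nodes * ips_per_node_block          # 256,000 pod addresses needed
prefix = 32 - math.ceil(math.log2(total))   # smallest prefix that covers them
print(f"/{prefix}")  # /14
```

A /14 (262,144 addresses) is the smallest power-of-two range that covers 1000 node-sized blocks under this assumption; leave extra headroom for upgrades and surge nodes.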
aks | Custom Node Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/custom-node-configuration.md | Create a `linuxosconfig.json` file with the following contents: Create a new cluster specifying the kubelet and OS configurations using the JSON files created in the previous step. > [!NOTE]-> When you create a cluster, you can specify the kubelet configuration, OS configuration, or both. If you specify a configuration when creating a cluster, only the nodes in the initial node pool will have that configuration applied. Any settings not configured in the JSON file will retain the default value. +> When you create a cluster, you can specify the kubelet configuration, OS configuration, or both. If you specify a configuration when creating a cluster, only the nodes in the initial node pool will have that configuration applied. Any settings not configured in the JSON file will retain the default value. CustomKubeletConfig or CustomLinuxOsConfig isn't supported for OS type: Windows. ```azurecli az aks create --name myAKSCluster --resource-group myResourceGroup --kubelet-config ./kubeletconfig.json --linux-os-config ./linuxosconfig.json |
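The overlay semantics described in the note above can be pictured as a dictionary merge: settings present in the JSON file win, and everything else keeps its default. The setting names and default values below are illustrative placeholders, not the actual AKS defaults:

```python
import json

# Hypothetical defaults and a user-supplied kubeletconfig.json; the keys and
# values are placeholders chosen for illustration only.
defaults = {"cpuManagerPolicy": "none", "failSwapOn": True, "podMaxPids": -1}

user_config = json.loads('{"cpuManagerPolicy": "static"}')  # file contents

# Settings in the file override defaults; unspecified settings keep the default.
effective = {**defaults, **user_config}
print(effective)
# {'cpuManagerPolicy': 'static', 'failSwapOn': True, 'podMaxPids': -1}
```

Only `cpuManagerPolicy` changes; the two settings absent from the file retain their defaults, which mirrors the behavior the note describes.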
aks | Monitor Aks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/monitor-aks.md | Application Insights provides complete monitoring of applications running on AKS - [ASP.NET Applications](../azure-monitor/app/asp-net.md) - [ASP.NET Core Applications](../azure-monitor/app/asp-net-core.md) - [.NET Console Applications](../azure-monitor/app/console.md)-- [Java](../azure-monitor/app/java-in-process-agent.md)+- [Java](../azure-monitor/app/opentelemetry-enable.md?tabs=java) - [Node.js](../azure-monitor/app/nodejs.md) - [Python](../azure-monitor/app/opencensus-python.md) - [Other platforms](../azure-monitor/app/app-insights-overview.md#supported-languages) |
aks | Resize Node Pool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/resize-node-pool.md | Title: Resize node pools in Azure Kubernetes Service (AKS) description: Learn how to resize node pools for a cluster in Azure Kubernetes Service (AKS) by cordoning and draining. Previously updated : 02/24/2022 Last updated : 02/08/2023 #Customer intent: As a cluster operator, I want to resize my node pools so that I can run more or larger workloads. node/aks-nodepool1-31721111-vmss000002 cordoned ## Drain the existing nodes > [!IMPORTANT]-> To successfully drain nodes and evict running pods, ensure that any PodDisruptionBudgets (PDBs) allow for at least 1 pod replica to be moved at a time, otherwise the drain/evict operation will fail. To check this, you can run `kubectl get pdb -A` and make sure `ALLOWED DISRUPTIONS` is at least 1 or higher. +> To successfully drain nodes and evict running pods, ensure that any PodDisruptionBudgets (PDBs) allow for at least one pod replica to be moved at a time. Otherwise, the drain/evict operation will fail. To check this, you can run `kubectl get pdb -A` and verify `ALLOWED DISRUPTIONS` is at least one. Draining nodes will cause pods running on them to be evicted and recreated on the other, schedulable nodes. |
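The PDB check described above can be scripted: parse the `kubectl get pdb -A` output and flag any budget whose allowed disruptions is zero before starting a drain. A minimal sketch; the sample output and its column layout are fabricated for illustration and can vary by kubectl version:

```python
# Sample `kubectl get pdb -A` output (fabricated for illustration).
sample = """\
NAMESPACE   NAME          MIN AVAILABLE   MAX UNAVAILABLE   ALLOWED DISRUPTIONS   AGE
default     web-pdb       2               N/A               1                     12d
payments    api-pdb       3               N/A               0                     30d
"""

def blocking_pdbs(output: str) -> list:
    """Return namespace/name of PDBs that would block a node drain."""
    rows = output.strip().splitlines()[1:]  # skip the header row
    blocked = []
    for row in rows:
        cols = row.split()
        namespace, name, allowed = cols[0], cols[1], int(cols[-2])
        if allowed < 1:  # zero allowed disruptions blocks eviction
            blocked.append(f"{namespace}/{name}")
    return blocked

print(blocking_pdbs(sample))  # ['payments/api-pdb']
```

In practice you would feed this the real command output, for example via `subprocess.run(["kubectl", "get", "pdb", "-A"], ...)`, and resolve any flagged budgets before draining.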
aks | Update Credentials | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/update-credentials.md | You may also have [integrated your AKS cluster with Azure Active Directory (Azur Alternatively, you can use a managed identity for permissions instead of a service principal. Managed identities are easier to manage than service principals and do not require updates or rotations. For more information, see [Use managed identities](use-managed-identity.md). +> [!NOTE] +> - When you use the `az aks create` command to generate the service principal automatically, the service principal credentials are written to the file `~/.azure/aksServicePrincipal.json` on the machine used to run the command +> - If you don't specify a service principal with Azure CLI commands, the default service principal located at `~/.azure/aksServicePrincipal.json` is used + ## Before you begin You need the Azure CLI version 2.0.65 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli]. |
aks | Web App Routing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/web-app-routing.md | apiVersion: apps/v1 kind: Deployment metadata: name: aks-helloworld + namespace: hello-web-app-routing spec: replicas: 1 selector: apiVersion: v1 kind: Service metadata: name: aks-helloworld+ namespace: hello-web-app-routing spec: type: ClusterIP ports: When the Web Application Routing add-on is disabled, some Kubernetes resources m [kubectl-delete]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#delete [kubectl-logs]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs [ingress]: https://kubernetes.io/docs/concepts/services-networking/ingress/-[ingress-resource]: https://kubernetes.io/docs/concepts/services-networking/ingress/#the-ingress-resource +[ingress-resource]: https://kubernetes.io/docs/concepts/services-networking/ingress/#the-ingress-resource |
aks | Workload Identity Deploy Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-deploy-cluster.md | Title: Deploy and configure an Azure Kubernetes Service (AKS) cluster with workload identity (preview) description: In this Azure Kubernetes Service (AKS) article, you deploy an Azure Kubernetes Service cluster and configure it with an Azure AD workload identity (preview).- Last updated 01/11/2023 |
aks | Workload Identity Migrate From Pod Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-migrate-from-pod-identity.md | Title: Modernize your Azure Kubernetes Service (AKS) application to use workload identity + Title: Modernize your Azure Kubernetes Service (AKS) application to use workload identity (preview) description: In this Azure Kubernetes Service (AKS) article, you learn how to configure your Azure Kubernetes Service pod to authenticate with workload identity.- Previously updated : 11/3/2022 Last updated : 02/08/2023 -# Modernize application authentication with workload identity +# Modernize application authentication with workload identity (preview) This article focuses on pod-managed identity migration to Azure Active Directory (Azure AD) workload identity (preview) for your Azure Kubernetes Service (AKS) cluster. It also provides guidance depending on the version of the [Azure Identity][azure-identity-supported-versions] client library used by your container-based application. spec: This configuration applies to any configuration where a pod is being created. After updating or deploying your application, you can verify the pod is in a running state using the [kubectl describe pod][kubectl-describe] command. Replace the value `podName` with the image name of your deployed pod. ```bash-kubectl describe pods podName -c azwi-proxy +kubectl describe pods podName ``` To verify that pod is passing IMDS transactions, use the [kubectl logs][kubelet-logs] command. Replace the value `podName` with the image name of your deployed pod: This article showed you how to set up your pod to authenticate using a workload <!-- EXTERNAL LINKS --> [kubectl-describe]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe+[kubelet-logs]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs |
aks | Workload Identity Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-overview.md | Title: Use an Azure AD workload identity (preview) on Azure Kubernetes Service (AKS) description: Learn about Azure Active Directory workload identity (preview) for Azure Kubernetes Service (AKS) and how to migrate your application to authenticate using this identity. - Last updated 01/06/2023 |
api-management | Api Management Debug Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-debug-policies.md | This article describes how to debug API Management policies using the [Azure API ## Restrictions and limitations -This feature is only available in the Developer tier of API Management. Each API Management instance supports only one concurrent debugging session. +* This feature is only available in the **Developer** tier of API Management. Each API Management instance supports only one concurrent debugging session. ++* This feature uses the built-in (service-level) all-access subscription for debugging. The [**Allow tracing**](api-management-howto-api-inspector.md#verify-allow-tracing-setting) setting must be enabled in this subscription. + ## Initiate a debugging session |
api-management | Trace Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/trace-policy.md | The `trace` policy adds a custom trace into the request tracing output in the te - The policy adds a property in the log entry when [resource logs](./api-management-howto-use-azure-monitor.md#resource-logs) are enabled and the severity level specified in the policy is at or higher than the verbosity level specified in the diagnostic setting. - The policy is not affected by Application Insights sampling. All invocations of the policy will be logged. + [!INCLUDE [api-management-policy-generic-alert](../../includes/api-management-policy-generic-alert.md)] ## Policy statement |
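The interaction between the `trace` policy's severity and the diagnostic setting's verbosity described above reduces to an ordered comparison. The level names (`verbose`, `information`, `error`) are the ones the trace policy accepts; `entry_is_logged` is a hypothetical helper, not an API Management API:

```python
# Severity levels defined for the trace policy, least to most severe.
LEVELS = ("verbose", "information", "error")

def entry_is_logged(policy_severity, diagnostic_verbosity, resource_logs_enabled=True):
    """Sketch of the filtering rule: the trace entry is written only when
    resource logs are enabled and the policy's severity is at or above the
    verbosity configured in the diagnostic setting."""
    if not resource_logs_enabled:
        return False
    return LEVELS.index(policy_severity) >= LEVELS.index(diagnostic_verbosity)

print(entry_is_logged("information", "verbose"))  # True
print(entry_is_logged("information", "error"))    # False
```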
app-service | App Service Plan Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-plan-manage.md | You can create an empty App Service plan, or you can create a plan as part of ap ## Move an app to another App Service plan -You can move an app to another App Service plan, as long as the source plan and the target plan are in the _same resource group and geographical region_. +You can move an app to another App Service plan, as long as the source plan and the target plan are in the _same resource group and geographical region_ and are of the _same OS type_. Changing the OS type, such as moving from Windows to Linux, isn't supported. + > [!NOTE] > Azure deploys each new App Service plan into a deployment unit, internally called a webspace. Each region can have many webspaces, but your app can only move between plans that are created in the same webspace. An App Service Environment can have multiple webspaces, but your app can only move between plans that are created in the same webspace. |
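The move constraints above (same resource group, region, OS type, and internal webspace) can be sketched as a single eligibility check. The dictionary keys and function name here are hypothetical, chosen only to mirror the prose:

```python
def can_move(source_plan, target_plan):
    """Sketch of the documented constraints: a move is allowed only when
    both plans share resource group, region, OS type, and webspace."""
    return all(
        source_plan[key] == target_plan[key]
        for key in ("resource_group", "region", "os_type", "webspace")
    )

windows_plan = {"resource_group": "rg1", "region": "eastus",
                "os_type": "Windows", "webspace": "ws-1"}
linux_plan = dict(windows_plan, os_type="Linux")  # Windows -> Linux change

print(can_move(windows_plan, dict(windows_plan)))  # True
print(can_move(windows_plan, linux_plan))          # False
```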
app-service | Creation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/creation.md | -Be aware that after you create your App Service Environment, you can't change any of the following: +After you create your App Service Environment, you can't change any of the following: - Location - Subscription Make your subnet large enough to hold the maximum size that you'll scale your Ap Before you deploy your App Service Environment, think about the virtual IP (VIP) type and the deployment type. -With an *internal VIP*, an address in your App Service Environment subnet reaches your apps. Your apps aren't on a public DNS. When you create your App Service Environment in the Azure portal, you have an option to create an Azure private DNS zone for your App Service Environment. With an *external VIP*, your apps are on an address facing the public internet, and they're in a public DNS. +With an *internal VIP*, an address in your App Service Environment subnet reaches your apps. Your apps aren't on a public DNS. When you create your App Service Environment in the Azure portal, you have an option to create an Azure private DNS zone for your App Service Environment. With an *external VIP*, your apps are on an address facing the public internet, and they're in a public DNS. For both the *internal VIP* and *external VIP*, you can set the *Inbound IP address* to either *Automatic* or *Manual*. If you want to use the *Manual* option for an *external VIP*, you must first create a standard *Public IP address* in Azure. -For the deployment type, you can choose *single zone*, *zone redundant*, or *host group*. The single zone is available in all regions where App Service Environment v3 is available. With the single zone deployment type, you have a minimum charge in your App Service plan of one instance of Windows Isolated v2. As soon as you have one or more instances, then that charge goes away. It isn't an additive charge. 
+For the deployment type, you can choose *single zone*, *zone redundant*, or *host group*. The single zone is available in all regions where App Service Environment v3 is available. With the single zone deployment type, you have a minimum charge in your App Service plan of one instance of Windows Isolated v2. As soon as you have one or more instances, that charge goes away. It isn't an additive charge. -In a zone redundant App Service Environment, your apps spread across three zones in the same region. Zone redundant is available in regions that support availability zones. With this deployment type, the smallest size for your App Service plan is three instances. That ensures that there is an instance in each availability zone. App Service plans can be scaled up one or more instances at a time. Scaling doesn't need to be in units of three, but the app is only balanced across all availability zones when the total instances are multiples of three. +In a zone redundant App Service Environment, your apps spread across three zones in the same region. Zone redundant is available in regions that support availability zones. With this deployment type, the smallest size for your App Service plan is three instances. That ensures that there's an instance in each availability zone. App Service plans can be scaled up one or more instances at a time. Scaling doesn't need to be in units of three, but the app is only balanced across all availability zones when the total instances are multiples of three. -A zone redundant deployment has triple the infrastructure, and ensures that even if two of the three zones go down, your workloads remain available. Due to the increased system need, the minimum charge for a zone redundant App Service Environment is nine instances. If you have fewer than this number of instances, the difference is charged as Windows I1v2. If you have nine or more instances, there is no added charge to have a zone redundant App Service Environment. 
To learn more about zone redundancy, see [Regions and availability zones](./overview-zone-redundancy.md). +A zone redundant deployment has triple the infrastructure, and ensures that even if two of the three zones go down, your workloads remain available. Due to the increased system need, the minimum charge for a zone redundant App Service Environment is nine instances. If you have fewer than this number of instances, the difference is charged as Windows I1v2. If you have nine or more instances, there's no added charge to have a zone redundant App Service Environment. To learn more about zone redundancy, see [Regions and availability zones](./overview-zone-redundancy.md). -In a host group deployment, your apps are deployed onto a dedicated host group. The dedicated host group isn't zone redundant. With this type of deployment, you can install and use your App Service Environment on dedicated hardware. There is no minimum instance charge for using App Service Environment on a dedicated host group, but you do have to pay for the host group when you're provisioning the App Service Environment. You also pay a discounted App Service plan rate as you create your plans and scale out. +In a host group deployment, your apps are deployed onto a dedicated host group. The dedicated host group isn't zone redundant. With this type of deployment, you can install and use your App Service Environment on dedicated hardware. There's no minimum instance charge for using App Service Environment on a dedicated host group, but you do have to pay for the host group when you're provisioning the App Service Environment. You also pay a discounted App Service plan rate as you create your plans and scale out. With a dedicated host group deployment, there are a finite number of cores available that are used by both the App Service plans and the infrastructure roles. This type of deployment can't reach the 200 total instance count normally available in App Service Environment. 
The number of total instances possible is related to the total number of App Service plan instances, plus the load-based number of infrastructure roles. Here's how: 1. Search Azure Marketplace for *App Service Environment v3*. -1. From the **Basics** tab, for **Subscription**, select the subscription. For **Resource Group**, select or create the resource group, and enter the name of your App Service Environment. For **Virtual IP**, select **Internal** if you want your inbound address to be an address in your subnet. Select **External** if you want your inbound address to face the public internet. For **App Service Environment Name**, enter a name. The name you choose will also be used for the domain suffix. For example, if the name you choose is *contoso*, and you have an internal VIP, the domain suffix will be `contoso.appserviceenvironment.net`. If the name you choose is *contoso*, and you have an external VIP, the domain suffix will be `contoso.p.azurewebsites.net`. +2. From the **Basics** tab, for **Subscription**, select the subscription. For **Resource Group**, select or create the resource group, and enter the name of your App Service Environment. For **Virtual IP**, select **Internal** if you want your inbound address to be an address in your subnet. Select **External** if you want your inbound address to face the public internet. For **App Service Environment Name**, enter a name. The name you choose will also be used for the domain suffix. For example, if the name you choose is *contoso*, and you have an internal VIP, the domain suffix will be `contoso.appserviceenvironment.net`. If the name you choose is *contoso*, and you have an external VIP, the domain suffix will be `contoso.p.azurewebsites.net`.  -1. From the **Hosting** tab, for **Host group deployment**, select **Enabled** or **Disabled**. If you enable this option, you can deploy onto dedicated hardware. 
If you do so, you're charged for the entire dedicated host during the creation of the App Service Environment, and then you're charged a reduced price for your App Service plan instances. +3. From the **Hosting** tab, for **Physical hardware isolation**, select **Enabled** or **Disabled**. If you enable this option, you can deploy onto dedicated hardware. With a dedicated host deployment, you're charged for two dedicated hosts per our pricing when you create the App Service Environment v3 and then, as you scale, you're charged a specialized Isolated v2 rate per vCore. I1v2 uses two vCores, I2v2 uses four vCores, and I3v2 uses eight vCores per instance.  -1. From the **Networking** tab, for **Virtual Network**, select or create your virtual network. For **Subnet**, select or create your subnet. If you're creating an App Service Environment with an internal VIP, you can configure Azure DNS private zones to point your domain suffix to your App Service Environment. For more details, see the DNS section in [Use an App Service Environment][UsingASE]. +4. From the **Networking** tab, for **Virtual Network**, select or create your virtual network. For **Subnet**, select or create your subnet. If you're creating an App Service Environment with an internal VIP, you can configure Azure DNS private zones to point your domain suffix to your App Service Environment. For more information, see the DNS section in [Use an App Service Environment][UsingASE]. If you're creating an App Service Environment with an internal VIP, you can specify a private IP address by using the **Manual** option for **Inbound IP address**. -  +  -1. From the **Review + create** tab, check that your configuration is correct, and select **Create**. Your App Service Environment can take up to two hours to create. +If you're creating an App Service Environment with an external VIP, you can select a public IP address by using the **Manual** option for **Inbound IP address**. ++ ++5. 
From the **Review + create** tab, check that your configuration is correct, and select **Create**. Your App Service Environment can take up to two hours to create. When your App Service Environment has been successfully created, you can select it as a location when you're creating your apps. |
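The zone redundant minimum charge described in this article reduces to simple arithmetic: nine instances are always billed, and any shortfall below your actual plan instance count is charged as Windows I1v2. A sketch (the function name is illustrative, not an Azure API):

```python
MINIMUM_BILLED = 9  # zone redundant App Service Environment minimum charge

def supplemental_i1v2_instances(plan_instances):
    """Sketch of the billing rule: with fewer than nine plan instances,
    the shortfall is charged as Windows I1v2 instances; at nine or more,
    zone redundancy adds no extra charge."""
    return max(0, MINIMUM_BILLED - plan_instances)

print(supplemental_i1v2_instances(3))   # 6 extra I1v2 instances billed
print(supplemental_i1v2_instances(12))  # 0
```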
app-service | Tutorial Multi Region App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-multi-region-app.md | + + Title: 'Tutorial: Create a multi-region app' +description: Learn how to build a multi-region app on Azure App Service that can be used for high availability and fault tolerance. +keywords: azure app service, web app, multiregion, multi-region, multiple regions ++ Last updated : 2/8/2023++++# Tutorial: Create a highly available multi-region app in Azure App Service ++High availability and fault tolerance are key components of a well-architected solution. It's best to prepare for the unexpected by having an emergency plan that can shorten downtime and keep your systems up and running automatically when something fails. ++When you deploy your application to the cloud, you choose a region in that cloud where your application infrastructure is based. If your application is deployed to a single region, and the region becomes unavailable, your application will also be unavailable. This lack of availability may be unacceptable under the terms of your application's SLA. If so, deploying your application and its services across multiple regions is a good solution. ++In this tutorial, you'll learn how to deploy a highly available multi-region web app. This scenario will be kept simple by restricting the application components to just a web app and [Azure Front Door](../frontdoor/front-door-overview.md), but the concepts can be expanded and applied to other infrastructure patterns. For example, if your application connects to an Azure database offering or storage account, see [active geo-replication for SQL databases](/azure/azure-sql/database/active-geo-replication-overview) and [redundancy options for storage accounts](../storage/common/storage-redundancy.md). 
For a reference architecture for a more detailed scenario, see [Highly available multi-region web application](/azure/architecture/reference-architectures/app-service-web-app/multi-region). ++The following architecture diagram shows the infrastructure you'll be creating during this tutorial. It consists of two identical App Services in separate regions, one being the active or primary region and the other being the standby or secondary region. Azure Front Door is used to route traffic to the App Services, and access restrictions are configured so that direct access to the apps from the internet is blocked. The dotted line indicates that traffic will only be sent to the standby region if the active region goes down. ++Azure provides various options for load balancing and traffic routing. Azure Front Door was selected for this use case because it involves internet-facing web apps hosted on Azure App Service deployed in multiple regions. To help you decide what to use for your use case if it differs from this tutorial, see the [decision tree for load balancing in Azure](/azure/architecture/guide/technology-choices/load-balancing-overview). +++With this architecture: ++- Identical App Service apps are deployed in two separate regions. +- Public traffic directly to the App Service apps is blocked. +- Azure Front Door is used to route traffic to the primary/active region. The secondary region has an App Service that's up and running and ready to serve traffic if needed. ++What you'll learn: ++> [!div class="checklist"] +> * Create identical App Services in separate regions. +> * Create Azure Front Door with access restrictions that block public access to the App Services. ++## Prerequisites +++To complete this tutorial: +++## Create two instances of a web app ++You'll need two instances of a web app that run in different Azure regions for this tutorial. 
You'll use the [region pair](../availability-zones/cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies) East US/West US as your two regions and create two empty web apps. Feel free to choose your own regions if needed. ++To make management and clean-up simpler, you'll use a single resource group for all resources in this tutorial. Consider using separate resource groups for each region/resource to further isolate your resources in a disaster recovery situation. ++Run the following command to create your resource group. ++```azurecli-interactive +az group create --name myresourcegroup --location eastus +``` ++### Create App Service plans ++Run the following commands to create the App Service plans. Replace the placeholders for `<app-service-plan-east-us>` and `<app-service-plan-west-us>` with two unique names where you can easily identify the region they're in. ++```azurecli-interactive +az appservice plan create --name <app-service-plan-east-us> --resource-group myresourcegroup --is-linux --location eastus ++az appservice plan create --name <app-service-plan-west-us> --resource-group myresourcegroup --is-linux --location westus +``` ++### Create web apps ++Once the App Service plans are created, run the following commands to create the web apps. Replace the placeholders for `<web-app-east-us>` and `<web-app-west-us>` with two globally unique names (valid characters are `a-z`, `0-9`, and `-`) and be sure to pay attention to the `--plan` parameter so that you place one app in each plan (and therefore in each region). Replace the `<runtime>` parameter with the language version of your app. Run `az webapp list-runtimes` for the list of available runtimes. If you plan on using the sample Node.js app given in this tutorial in the following sections, use "NODE:18-lts" as your runtime. 
++```azurecli-interactive +az webapp create --name <web-app-east-us> --resource-group myresourcegroup --plan <app-service-plan-east-us> --runtime <runtime> ++az webapp create --name <web-app-west-us> --resource-group myresourcegroup --plan <app-service-plan-west-us> --runtime <runtime> +``` ++Make note of the default hostname of each web app so you can define the backend addresses when you deploy the Front Door in the next step. It should be in the format `<web-app-name>.azurewebsites.net`. These hostnames can be found by running the following command or by navigating to the app's **Overview** page in the [Azure portal](https://portal.azure.com). ++```azurecli-interactive +az webapp show --name <web-app-name> --resource-group myresourcegroup --query "hostNames" +``` ++## Create an Azure Front Door ++A multi-region deployment can use an active-active or active-passive configuration. An active-active configuration distributes requests across multiple active regions. An active-passive configuration keeps running instances in the secondary region, but doesn't send traffic there unless the primary region fails. Azure Front Door has a built-in feature that allows you to enable these configurations. For more information on designing apps for high availability and fault tolerance, see [Architect Azure applications for resiliency and availability](/azure/architecture/reliability/architect). ++### Create an Azure Front Door profile ++You'll now create an [Azure Front Door Premium](../frontdoor/front-door-overview.md) to route traffic to your apps. ++Run [az afd profile create](/cli/azure/afd/profile#az-afd-profile-create) to create an Azure Front Door profile. ++> [!NOTE] +> If you want to deploy Azure Front Door Standard instead of Premium, substitute the value of the `--sku` parameter with Standard_AzureFrontDoor. You won't be able to deploy managed rules with WAF Policy if you choose the Standard SKU. 
For a detailed comparison of the SKUs, see [Azure Front Door tier comparison](../frontdoor/standard-premium/tier-comparison.md). ++```azurecli-interactive +az afd profile create --profile-name myfrontdoorprofile --resource-group myresourcegroup --sku Premium_AzureFrontDoor +``` ++|Parameter |Value |Description | +|||| +|profile-name |myfrontdoorprofile |Name for the Azure Front Door profile, which is unique within the resource group. | +|resource-group |myresourcegroup |The resource group that contains the resources from this tutorial. | +|sku |Premium_AzureFrontDoor |The pricing tier of the Azure Front Door profile. | +++### Add an endpoint ++Run [az afd endpoint create](/cli/azure/afd/endpoint#az-afd-endpoint-create) to create an endpoint in your profile. You can create multiple endpoints in your profile after finishing the create experience. ++```azurecli-interactive +az afd endpoint create --resource-group myresourcegroup --endpoint-name myendpoint --profile-name myfrontdoorprofile --enabled-state Enabled +``` ++|Parameter |Value |Description | +|||| +|endpoint-name |myendpoint |Name of the endpoint under the profile, which is unique globally. | +|enabled-state |Enabled |Whether to enable this endpoint. | ++### Create an origin group ++Run [az afd origin-group create](/cli/azure/afd/origin-group#az-afd-origin-group-create) to create an origin group that contains your two web apps. ++```azurecli-interactive +az afd origin-group create --resource-group myresourcegroup --origin-group-name myorigingroup --profile-name myfrontdoorprofile --probe-request-type GET --probe-protocol Http --probe-interval-in-seconds 60 --probe-path / --sample-size 4 --successful-samples-required 3 --additional-latency-in-milliseconds 50 +``` ++|Parameter |Value |Description | +|||| +|origin-group-name |myorigingroup |Name of the origin group. | +|probe-request-type |GET |The type of health probe request that is made. | +|probe-protocol |Http |Protocol to use for health probe. 
| +|probe-interval-in-seconds |60 |The number of seconds between health probes. | +|probe-path |/ |The path relative to the origin that is used to determine the health of the origin. | +|sample-size |4 |The number of samples to consider for load balancing decisions. | +|successful-samples-required |3 |The number of samples within the sample period that must succeed. | +|additional-latency-in-milliseconds |50 |The additional latency in milliseconds for probes to fall into the lowest latency bucket. | ++### Add an origin to the group ++Run [az afd origin create](/cli/azure/afd/origin#az-afd-origin-create) to add an origin to your origin group. For the `--host-name` parameter, replace the placeholder for `<web-app-east-us>` with your app name in that region. Notice the `--priority` parameter is set to "1", which indicates all traffic will be sent to your primary app. ++```azurecli-interactive +az afd origin create --resource-group myresourcegroup --host-name <web-app-east-us>.azurewebsites.net --profile-name myfrontdoorprofile --origin-group-name myorigingroup --origin-name primaryapp --origin-host-header <web-app-east-us>.azurewebsites.net --priority 1 --weight 1000 --enabled-state Enabled --http-port 80 --https-port 443 +``` ++|Parameter |Value |Description | +|||| +|host-name |`<web-app-east-us>.azurewebsites.net` |The hostname of the primary web app. | +|origin-name |primaryapp |Name of the origin. | +|origin-host-header |`<web-app-east-us>.azurewebsites.net` |The host header to send for requests to this origin. If you leave this blank, the request hostname determines this value. Azure CDN origins, such as Web Apps, Blob Storage, and Cloud Services require this host header value to match the origin hostname by default. | +|priority |1 |Set this parameter to 1 to direct all traffic to the primary web app. | +|weight |1000 |Weight of the origin in given origin group for load balancing. Must be between 1 and 1000. 
| +|enabled-state |Enabled |Whether to enable this origin. | +|http-port |80 |The port used for HTTP requests to the origin. | +|https-port |443 |The port used for HTTPS requests to the origin. | ++Repeat this step to add your second origin. Pay attention to the `--priority` parameter. For this origin, it's set to "2". This priority setting tells Azure Front Door to direct all traffic to the primary origin unless the primary goes down. Be sure to replace both instances of the placeholder for `<web-app-west-us>` with the name of that web app. ++```azurecli-interactive +az afd origin create --resource-group myresourcegroup --host-name <web-app-west-us>.azurewebsites.net --profile-name myfrontdoorprofile --origin-group-name myorigingroup --origin-name secondaryapp --origin-host-header <web-app-west-us>.azurewebsites.net --priority 2 --weight 1000 --enabled-state Enabled --http-port 80 --https-port 443 +``` ++### Add a route ++Run [az afd route create](/cli/azure/afd/route#az-afd-route-create) to map your endpoint to the origin group. This route forwards requests from the endpoint to your origin group. ++```azurecli-interactive +az afd route create --resource-group myresourcegroup --profile-name myfrontdoorprofile --endpoint-name myendpoint --forwarding-protocol MatchRequest --route-name route --https-redirect Enabled --origin-group myorigingroup --supported-protocols Http Https --link-to-default-domain Enabled +``` ++|Parameter |Value |Description | +|||| +|endpoint-name |myendpoint |Name of the endpoint. | +|forwarding-protocol |MatchRequest |Protocol this rule will use when forwarding traffic to backends. | +|route-name |route |Name of the route. | +|https-redirect |Enabled |Whether to automatically redirect HTTP traffic to HTTPS traffic. | +|supported-protocols |Http Https |List of supported protocols for this route. | +|link-to-default-domain |Enabled |Whether this route will be linked to the default endpoint domain. 
| ++Allow about 15 minutes for this step to complete as it takes some time for this change to propagate globally. After this period, your Azure Front Door will be fully functional. ++### Restrict access to web apps to the Azure Front Door instance ++If you try to access your apps directly using their URLs at this point, you'll still be able to. To ensure traffic can only reach your apps through Azure Front Door, you'll set access restrictions on each of your apps. Front Door's features work best when traffic only flows through Front Door. You should configure your origins to block traffic that hasn't been sent through Front Door. Otherwise, traffic might bypass Front Door's web application firewall, DDoS protection, and other security features. Traffic from Azure Front Door to your applications originates from a well-known set of IP ranges defined in the AzureFrontDoor.Backend service tag. By using a service tag restriction rule, you can [restrict traffic to only originate from Azure Front Door](../frontdoor/origin-security.md). ++Before setting up the App Service access restrictions, take note of the *Front Door ID* by running the following command. This ID will be needed to ensure traffic only originates from your specific Front Door instance. The access restriction further filters the incoming requests based on the unique HTTP header that your Azure Front Door sends. ++```azurecli-interactive +az afd profile show --resource-group myresourcegroup --profile-name myfrontdoorprofile --query "frontDoorId" +``` ++Run the following commands to set the access restrictions on your web apps. Replace the placeholder for `<front-door-id>` with the result from the previous command. Replace the placeholders for the app names. 
++```azurecli-interactive +az webapp config access-restriction add --resource-group myresourcegroup -n <web-app-east-us> --priority 100 --service-tag AzureFrontDoor.Backend --http-header x-azure-fdid=<front-door-id> ++az webapp config access-restriction add --resource-group myresourcegroup -n <web-app-west-us> --priority 100 --service-tag AzureFrontDoor.Backend --http-header x-azure-fdid=<front-door-id> +``` ++## Test the Front Door ++When you create the Azure Front Door Standard/Premium profile, it takes a few minutes for the configuration to be deployed globally. Once completed, you can access the frontend host you created. ++Run [az afd endpoint show](/cli/azure/afd/endpoint#az-afd-endpoint-show) to get the hostname of the Front Door endpoint. ++```azurecli-interactive +az afd endpoint show --resource-group myresourcegroup --profile-name myfrontdoorprofile --endpoint-name myendpoint --query "hostName" +``` ++In a browser, go to the endpoint hostname that the previous command returned: `<myendpoint>-<hash>.z01.azurefd.net`. Your request will automatically get routed to the primary app in East US. ++To test instant global failover: ++1. Open a browser and go to the endpoint hostname: `<myendpoint>-<hash>.z01.azurefd.net`. +1. Stop the primary app by running [az webapp stop](/cli/azure/webapp#az-webapp-stop&preserve-view=true). ++ ```azurecli-interactive + az webapp stop --name <web-app-east-us> --resource-group myresourcegroup + ``` ++1. Refresh your browser. You should see the same information page because traffic is now directed to the running app in West US. ++ > [!TIP] + > You might need to refresh the page a couple times as failover may take a couple seconds. ++1. Now stop the secondary app. ++ ```azurecli-interactive + az webapp stop --name <web-app-west-us> --resource-group myresourcegroup + ``` ++1. Refresh your browser. This time, you should see an error message. 
++ :::image type="content" source="../frontdoor/media/create-front-door-portal/web-app-stopped-message.png" alt-text="Screenshot of the message: Both instances of the web app stopped."::: ++1. Restart one of the Web Apps by running [az webapp start](/cli/azure/webapp#az-webapp-start&preserve-view=true). Refresh your browser and you should see the app again. ++ ```azurecli-interactive + az webapp start --name <web-app-east-us> --resource-group myresourcegroup + ``` ++You've now validated that you can access your apps through Azure Front Door and that failover functions as intended. Restart your other app if you're done with failover testing. ++To test your access restrictions and ensure your apps can only be reached through Azure Front Door, open a browser and navigate to each of your app's URLs. To find the URLs, run the following commands: ++```azurecli-interactive +az webapp show --name <web-app-east-us> --resource-group myresourcegroup --query "hostNames" ++az webapp show --name <web-app-west-us> --resource-group myresourcegroup --query "hostNames" +``` ++You should see an error page indicating that the apps aren't accessible. ++## Clean up resources ++In the preceding steps, you created Azure resources in a resource group. If you don't expect to need these resources in the future, delete the resource group by running the following command in the Cloud Shell. ++```azurecli-interactive +az group delete --name myresourcegroup +``` ++This command may take a few minutes to run. ++## Deploy from ARM/Bicep ++The resources you created in this tutorial can be deployed using an ARM/Bicep template. The [Highly available multi-region web app Bicep template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/webapp-multi-region-front-door) allows you to create a secure, highly available, multi-region end to end solution with two web apps in different regions behind Azure Front Door. 
++To learn how to deploy ARM/Bicep templates, see [How to deploy resources with Bicep and Azure CLI](../azure-resource-manager/bicep/deploy-cli.md). ++## Frequently asked questions ++In this tutorial so far, you've deployed the baseline infrastructure to enable a multi-region web app. App Service provides features that can help you ensure you're running applications following security best practices and recommendations. ++This section contains frequently asked questions that can help you further secure your apps and deploy and manage your resources using best practices. ++### What is the recommended method for managing and deploying application infrastructure and Azure resources? ++For this tutorial, you used the Azure CLI to deploy your infrastructure resources. Consider configuring a continuous deployment mechanism to manage your application infrastructure. Since you're deploying resources in different regions, you'll need to independently manage those resources across the regions. To ensure the resources are identical across each region, infrastructure as code (IaC) such as [Azure Resource Manager templates](../azure-resource-manager/management/overview.md) or [Terraform](/azure/developer/terraform/overview) should be used with deployment pipelines such as [Azure Pipelines](/azure/devops/pipelines/get-started/what-is-azure-pipelines) or [GitHub Actions](https://docs.github.com/actions). That way, if configured appropriately, any change to resources triggers updates across all regions you're deployed to. For more information, see [Continuous deployment to Azure App Service](deploy-continuous-deployment.md). ++### How can I use staging slots to practice safe deployment to production? ++Deploying your application code directly to production apps or slots isn't recommended, because you want a safe place to test your apps and validate your changes before pushing them to production. 
Use a combination of staging slots and slot swap to move code from your testing environment to production. ++You already created the baseline infrastructure for this scenario. You'll now create deployment slots for each instance of your app and configure continuous deployment to these staging slots with GitHub Actions. As with infrastructure management, configuring continuous deployment for your application source code is also recommended to ensure changes across regions are in sync. If you don't configure continuous deployment, you'll need to manually update each app in each region every time there's a code change. ++For the remaining steps in this tutorial, you should have an app ready to deploy to your App Services. If you need a sample app, you can use the [Node.js Hello World sample app](https://github.com/Azure-Samples/nodejs-docs-hello-world). Fork that repository so you have your own copy. ++Be sure to set the App Service stack settings for your apps. Stack settings refer to the language or runtime used for your app. This setting can be configured using the Azure CLI with the `az webapp config set` command or in the portal with the following steps. If you use the Node.js sample, set the stack settings to "Node 18 LTS". ++1. Go to your app and select **Configuration** in the left-hand table of contents. +1. Select the **General settings** tab. +1. Under **Stack settings**, choose the appropriate value for your app. +1. Select **Save** and then **Continue** to confirm the update. +1. Repeat these steps for your other apps. ++Run the following commands to create staging slots called "stage" for each of your apps. Replace the placeholders for `<web-app-east-us>` and `<web-app-west-us>` with your app names. 
++```azurecli-interactive +az webapp deployment slot create --resource-group myresourcegroup --name <web-app-east-us> --slot stage --configuration-source <web-app-east-us> ++az webapp deployment slot create --resource-group myresourcegroup --name <web-app-west-us> --slot stage --configuration-source <web-app-west-us> +``` ++To set up continuous deployment, you should use the Azure portal. For detailed guidance on how to configure continuous deployment with providers such as GitHub Actions, see [Continuous deployment to Azure App Service](deploy-continuous-deployment.md). ++To configure continuous deployment with GitHub Actions, complete the following steps for each of your staging slots. ++1. In the [Azure portal](https://portal.azure.com), go to the management page for one of your App Service app slots. +1. In the left pane, select **Deployment Center**. Then select **Settings**. +1. In the **Source** box, select "GitHub" from the CI/CD options: ++ :::image type="content" source="media/app-service-continuous-deployment/choose-source.png" alt-text="Screenshot that shows how to choose the deployment source"::: ++1. If you're deploying from GitHub for the first time, select **Authorize** and follow the authorization prompts. If you want to deploy from a different user's repository, select **Change Account**. +1. After you authorize your Azure account with GitHub, select the **Organization**, **Repository**, and **Branch** to configure CI/CD for. If you can't find an organization or repository, you might need to enable more permissions on GitHub. For more information, see [Managing access to your organization's repositories](https://docs.github.com/organizations/managing-access-to-your-organizations-repositories). ++ 1. If you're using the Node.js sample app, use the following settings. + + |Setting |Value | + |--|--| + |Organization |`<your-GitHub-organization>` | + |Repository |nodejs-docs-hello-world | + |Branch |main | + +1. Select **Save**. 
++ New commits in the selected repository and branch now deploy continuously into your App Service app slot. You can track the commits and deployments on the **Logs** tab. ++A default workflow file that uses a publish profile to authenticate to App Service is added to your GitHub repository. You can view this file by going to the `<repo-name>/.github/workflows/` directory. ++### How do I disable basic auth on App Service? ++Consider [disabling basic auth on App Service](https://azure.github.io/AppService/2020/08/10/securing-data-plane-access.html), which limits access to the FTP and SCM endpoints to users that are backed by Azure Active Directory (Azure AD). If using a continuous deployment tool to deploy your application source code, disabling basic auth will require [extra steps to configure continuous deployment](deploy-github-actions.md). For example, you won't be able to use a publish profile since that authentication mechanism doesn't use Azure AD backed credentials. Instead, you'll need to use either a [service principal or OpenID Connect](deploy-github-actions.md#generate-deployment-credentials). ++To disable basic auth for your App Service, run the following commands for each app and slot by replacing the placeholders for `<web-app-east-us>` and `<web-app-west-us>` with your app names. The first set of commands disables FTP access for the production sites and staging slots, and the second set of commands disables basic auth access to the WebDeploy port and SCM site for the production sites and staging slots. 
++```azurecli-interactive +az resource update --resource-group myresourcegroup --name ftp --namespace Microsoft.Web --resource-type basicPublishingCredentialsPolicies --parent sites/<web-app-east-us> --set properties.allow=false ++az resource update --resource-group myresourcegroup --name ftp --namespace Microsoft.Web --resource-type basicPublishingCredentialsPolicies --parent sites/<web-app-east-us>/slots/stage --set properties.allow=false ++az resource update --resource-group myresourcegroup --name ftp --namespace Microsoft.Web --resource-type basicPublishingCredentialsPolicies --parent sites/<web-app-west-us> --set properties.allow=false ++az resource update --resource-group myresourcegroup --name ftp --namespace Microsoft.Web --resource-type basicPublishingCredentialsPolicies --parent sites/<web-app-west-us>/slots/stage --set properties.allow=false +``` ++```azurecli-interactive +az resource update --resource-group myresourcegroup --name scm --namespace Microsoft.Web --resource-type basicPublishingCredentialsPolicies --parent sites/<web-app-east-us> --set properties.allow=false ++az resource update --resource-group myresourcegroup --name scm --namespace Microsoft.Web --resource-type basicPublishingCredentialsPolicies --parent sites/<web-app-east-us>/slots/stage --set properties.allow=false ++az resource update --resource-group myresourcegroup --name scm --namespace Microsoft.Web --resource-type basicPublishingCredentialsPolicies --parent sites/<web-app-west-us> --set properties.allow=false ++az resource update --resource-group myresourcegroup --name scm --namespace Microsoft.Web --resource-type basicPublishingCredentialsPolicies --parent sites/<web-app-west-us>/slots/stage --set properties.allow=false +``` ++For more information on disabling basic auth including how to test and monitor logins, see [Disabling basic auth on App Service](https://azure.github.io/AppService/2020/08/10/securing-data-plane-access.html). 
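As a quick check that a policy change took effect, you can read a policy back with `az resource show`; for example (the same pattern works for any of the app and slot combinations above):

```azurecli-interactive
az resource show --resource-group myresourcegroup --name scm --namespace Microsoft.Web --resource-type basicPublishingCredentialsPolicies --parent sites/<web-app-east-us> --query "properties.allow"
```

The query should return `false` once basic auth is disabled for that endpoint.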
++### How do I deploy my code using continuous deployment if I disabled basic auth? ++If you choose to allow basic auth on your App Service apps, you can use any of the available deployment methods on App Service, including using the publish profile that was configured in the [staging slots](#how-can-i-use-staging-slots-to-practice-safe-deployment-to-production) section. ++If you disable basic auth for your App Services, continuous deployment requires a service principal or OpenID Connect for authentication. If you use GitHub Actions as your code repository, see the [step-by-step tutorial for using a service principal or OpenID Connect to deploy to App Service using GitHub Actions](deploy-github-actions.md) or complete the steps in the following section. ++#### Create the service principal and configure credentials with GitHub Actions ++To configure continuous deployment with GitHub Actions and a service principal, use the following steps. ++1. Run the following command to create the [service principal](../active-directory/develop/app-objects-and-service-principals.md#service-principal-object). Replace the placeholders with your `<subscription-id>` and app names. The output is a JSON object with the role assignment credentials that provide access to your App Service apps. Copy this JSON object for the next step. It will include your client secret, which will only be visible at this time. It's always a good practice to grant minimum access. The scope in this example is limited to just the apps, not the entire resource group. ++ ```bash + az ad sp create-for-rbac --name "myApp" --role contributor --scopes /subscriptions/<subscription-id>/resourceGroups/myresourcegroup/providers/Microsoft.Web/sites/<web-app-east-us> /subscriptions/<subscription-id>/resourceGroups/myresourcegroup/providers/Microsoft.Web/sites/<web-app-west-us> --sdk-auth + ``` ++1. 
You need to provide your service principal's credentials to the Azure Login action as part of the GitHub Actions workflow you'll be using. These values can either be provided directly in the workflow or can be stored in GitHub secrets and referenced in your workflow. Saving the values as GitHub secrets is the more secure option. + 1. Open your GitHub repository and go to **Settings** > **Security** > **Secrets and variables** > **Actions**. + 1. Select **New repository secret** and create a secret for each of the following values. The values can be found in the JSON output you copied earlier. ++ |Name |Value | + |--|--| + |AZURE_APP_ID |`<application/client-id>` | + |AZURE_PASSWORD |`<client-secret>` | + |AZURE_TENANT_ID |`<tenant-id>` | + |AZURE_SUBSCRIPTION_ID |`<subscription-id>` | ++#### Create the GitHub Actions workflow ++Now that you have a service principal that can access your App Service apps, edit the default workflows that were created for your apps when you configured continuous deployment. Authentication must be done using your service principal instead of the publish profile. For sample workflows, see the "Service principal" tab in [Deploy to App Service](deploy-github-actions.md#deploy-to-app-service). The following sample workflow can be used for the Node.js sample app that was provided. ++1. Open your app's GitHub repository and go to the `<repo-name>/.github/workflows/` directory. You'll see the autogenerated workflows. +1. For each workflow file, select the "pencil" button in the top right to edit the file. Replace the contents with the following text, which assumes you created the GitHub secrets earlier for your credentials. Update the placeholder for `<web-app-name>` under the "env" section, and then commit directly to the main branch. This commit will trigger the GitHub Actions workflow to run again and deploy your code, this time using the service principal to authenticate. 
++ ```yml ++ name: Build and deploy Node.js app to Azure Web App + + on: + push: + branches: + - main + workflow_dispatch: + + env: + AZURE_WEBAPP_NAME: <web-app-name> # set this to your application's name + NODE_VERSION: '18.x' # set this to the node version to use + AZURE_WEBAPP_PACKAGE_PATH: '.' # set this to the path to your web app project, defaults to the repository root + AZURE_WEBAPP_SLOT_NAME: stage # set this to your application's slot name + + jobs: + build: + runs-on: ubuntu-latest + + steps: + - uses: actions/checkout@v2 + + - name: Set up Node.js version + uses: actions/setup-node@v1 + with: + node-version: ${{ env.NODE_VERSION }} + + - name: npm install, build + run: | + npm install + npm run build --if-present + + - name: Upload artifact for deployment job + uses: actions/upload-artifact@v2 + with: + name: node-app + path: . + + deploy: + runs-on: ubuntu-latest + needs: build + environment: + name: 'stage' + url: ${{ steps.deploy-to-webapp.outputs.webapp-url }} + + steps: + - name: Download artifact from build job + uses: actions/download-artifact@v2 + with: + name: node-app ++ - uses: azure/login@v1 + with: + creds: | + { + "clientId": "${{ secrets.AZURE_APP_ID }}", + "clientSecret": "${{ secrets.AZURE_PASSWORD }}", + "subscriptionId": "${{ secrets.AZURE_SUBSCRIPTION_ID }}", + "tenantId": "${{ secrets.AZURE_TENANT_ID }}" + } + + - name: 'Deploy to Azure Web App' + id: deploy-to-webapp + uses: azure/webapps-deploy@v2 + with: + app-name: ${{ env.AZURE_WEBAPP_NAME }} + slot-name: ${{ env.AZURE_WEBAPP_SLOT_NAME }} + package: ${{ env.AZURE_WEBAPP_PACKAGE_PATH }} + + - name: logout + run: | + az logout + ``` ++### How does slot traffic routing allow me to test updates that I make to my apps? ++Traffic routing with slots allows you to direct a pre-defined portion of your user traffic to each slot. Initially, 100% of traffic is directed to the production site. However, you have the ability, for example, to send 10% of your traffic to your staging slot. 
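For example, a 10% split to the "stage" slots created earlier could be sketched with [az webapp traffic-routing set](/cli/azure/webapp/traffic-routing) (shown for one app; repeat for the other region):

```azurecli-interactive
# Route 10% of production traffic to the "stage" slot.
az webapp traffic-routing set --resource-group myresourcegroup --name <web-app-east-us> --distribution stage=10
```

To send all traffic back to production, clear the routing rules with `az webapp traffic-routing clear --resource-group myresourcegroup --name <web-app-east-us>`.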
If you configure slot traffic routing in this way, when users try to access your app, 10% of them will automatically be routed to the staging slot with no changes to your Front Door instance. To learn more about slot swaps and staging environments in App Service, see [Set up staging environments in Azure App Service](deploy-staging-slots.md). ++### How do I move my code from my staging slot to my production slot? ++Once you're done testing and validating in your staging slots, you can perform a [slot swap](deploy-staging-slots.md#swap-two-slots) from your staging slot to your production site. You'll need to do this swap for all instances of your app in each region. During a slot swap, the App Service platform [ensures the target slot doesn't experience downtime](deploy-staging-slots.md#swap-operation-steps). ++To perform the swap, run the following command for each app. Replace the placeholder for `<web-app-name>`. ++```azurecli-interactive +az webapp deployment slot swap --resource-group myresourcegroup --name <web-app-name> --slot stage --target-slot production +``` ++After a few minutes, you can navigate to your Front Door's endpoint to validate the slot swap succeeded. ++At this point, your apps are up and running and any changes you make to your application source code will automatically trigger an update to both of your staging slots. You can then repeat the slot swap process when you're ready to move that code into production. ++### How else can I use Azure Front Door in my multi-region deployments? ++If you're concerned about potential disruptions or issues with continuity across regions, such as some customers seeing one version of your app while others see another version, or if you're making significant changes to your apps, you can temporarily remove the site that's undergoing the slot swap from your Front Door's origin group. All traffic will then be directed to the other origin. 
Navigate to the **Update origin group** pane and **Delete** the origin that is undergoing the change. Once you've made all of your changes and are ready to serve traffic there again, you can return to the same pane and select **+ Add an origin** to re-add the origin. +++If you'd prefer not to delete and then re-add origins, you can create extra origin groups for your Front Door instance. You can then associate the route with the origin group pointing to the intended origin. For example, you can create two new origin groups, one for your primary region, and one for your secondary region. When your primary region is undergoing a change, associate the route with your secondary region and vice versa when your secondary region is undergoing a change. When all changes are complete, you can associate the route with your original origin group that contains both regions. This method works because a route can only be associated with one origin group at a time. ++To demonstrate working with multiple origins, in the following screenshot, there are three origin groups. "MyOriginGroup" consists of both web apps, and the other two origin groups each consist of the web app in their respective region. In the example, the app in the primary region is undergoing a change. Before that change was started, the route was associated with "MySecondaryRegion" so all traffic would be sent to the app in the secondary region during the change period. You can update the route by selecting "Unassociated", which will bring up the **Associate routes** pane. +++### How do I restrict access to the advanced tools site? ++With Azure App Service, the SCM/advanced tools site is used to manage your apps and deploy application source code. Consider [locking down the SCM/advanced tools site](app-service-ip-restrictions.md#restrict-access-to-an-scm-site) since this site will most likely not need to be reached through Front Door. 
For example, you can set up access restrictions that only allow you to conduct your testing and enable continuous deployment from your tool of choice. If you're using deployment slots, for production slots specifically, you can deny almost all access to the SCM site since your testing and validation will be done with your staging slots. ++## Next steps ++> [!div class="nextstepaction"] +> [How to deploy a highly available multi-region web app](https://azure.github.io/AppService/2022/12/02/multi-region-web-app.html) ++> [!div class="nextstepaction"] +> [Highly available zone-redundant web application](/azure/architecture/reference-architectures/app-service-web-app/zone-redundant) |
attestation | Custom Tcb Baseline Enforcement | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/custom-tcb-baseline-enforcement.md | Minimum PSW Windows version: "2.7.101.2" ## TCB baselines available in Azure which can be configured as custom TCB baseline ```- TCB identifier: "11" - TCB evaluation data number": "11" - TCB release date: "2021-06-09T00:00:00" - Minimum PSW Linux version: "2.13.3", - Minimum PSW Windows version: "2.13.100.2" -- TCB identifier: "10" - TCB evaluation data number: "10" - Tcb release date: "2020-11-11T00:00:00" - Minimum PSW Linux version: "2.9", - Minimum PSW Windows version: "2.7.101.2" + TCB identifier : 13 + TCB release date: 8/9/2022 + TCB evaluation data number : 13 + Minimum PSW Linux version : 2.17 + Minimum PSW Windows version : 2.16 + + TCB identifier : 12 + TCB release date: 11/10/2021 + TCB evaluation data number : 12 + Minimum PSW Linux version : 2.13.3 + Minimum PSW Windows version : 2.13.100.2 + + TCB identifier : 11 + TCB release date: 6/8/2021 + TCB evaluation data number : 11 + Minimum PSW Linux version : 2.13.3 + Minimum PSW Windows version : 2.13.100.2 + + TCB identifier : 10 + TCB release date: 11/10/2020 + TCB evaluation data number : 10 + Minimum PSW Linux version : 2.9 + Minimum PSW Windows version : 2.7.101.2 ``` ## How to configure an attestation policy with custom TCB baseline using Azure portal experience |
azure-arc | Conceptual Gitops Flux2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-gitops-flux2.md | Title: "GitOps Flux v2 configurations with AKS and Azure Arc-enabled Kubernetes" description: "This article provides a conceptual overview of GitOps in Azure for use in Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters." Previously updated : 02/02/2023 Last updated : 02/07/2023 GitOps is enabled in an Azure Arc-enabled Kubernetes or AKS cluster as a `Micros ### Version support -The most recent version of the Flux v2 extension and the two previous versions (N-2) are supported. We generally recommend that you use the most recent version of the extension. +The most recent version of the Flux v2 extension and the two previous versions (N-2) are supported. We generally recommend that you use the [most recent version](extensions-release.md#flux-gitops) of the extension. > [!NOTE] > Eventually Azure will stop supporting GitOps with Flux v1, so we recommend [migrating to Flux v2](#migrate-from-flux-v1) as soon as possible. az k8s-extension update --configuration-settings multiTenancy.enforce=false -c C ## Migrate from Flux v1 -If you've been using Flux v1 in Azure Arc-enabled Kubernetes or AKS clusters and want to migrate to using Flux v2 in the same clusters, you first need to delete the Flux v1 `sourceControlConfigurations` from the clusters. The `microsoft.flux` cluster extension won't install if there are Flux v1 `sourceControlConfigurations` resources in the cluster. +If you are still using Flux v1, we recommend migrating to Flux v2 as soon as possible. ++To migrate to using Flux v2 in the same clusters where you've been using Flux v1, you first need to delete all Flux v1 `sourceControlConfigurations` from the clusters. 
Because Flux v2 has a fundamentally different architecture, the `microsoft.flux` cluster extension won't install if there are Flux v1 `sourceControlConfigurations` resources in a cluster. ++Removing Flux v1 `sourceControlConfigurations` will not stop any applications that are running on the clusters. However, during the period after the Flux v1 configuration is removed and before the Flux v2 extension is fully deployed, expect that: ++* If there are new changes in the application manifests stored in a Git repository, these will not be pulled during the migration, and the application version deployed on the cluster will be stale. +* If there are unintended changes in the cluster state and it deviates from the desired state specified in the source Git repository, the cluster will not be able to self-heal. ++We recommend testing your migration scenario in a development environment before migrating your production environment. The process of removing Flux v1 configurations and deploying Flux v2 configurations should not take more than 30 minutes. ++### View and delete Flux v1 configurations Use these Azure CLI commands to find and then delete existing `sourceControlConfigurations` in a cluster: az k8s-configuration list --cluster-name <Arc or AKS cluster name> --cluster-typ az k8s-configuration delete --name <configuration name> --cluster-name <Arc or AKS cluster name> --cluster-type <connectedClusters OR managedClusters> --resource-group <resource group name> ``` -You can also use the Azure portal to view and delete existing GitOps configurations in Azure Arc-enabled Kubernetes or AKS clusters. +You can also view and delete existing GitOps configurations for a cluster in the Azure portal. To do so, navigate to the cluster where the configuration was created and select **GitOps** in the left pane. Select the configuration, then select **Delete**. 
++### Deploy Flux v2 configurations ++Use the Azure portal or Azure CLI to [apply Flux v2 configurations](tutorial-use-gitops-flux2.md#apply-a-flux-configuration) to your clusters. ++### Flux v1 retirement information ++The open-source project of Flux v1 has been archived, and feature development has been stopped indefinitely. For more information, see the [fluxcd project](https://fluxcd.io/docs/migration/). ++Flux v2 was launched as the upgraded open-source project of Flux. It has a new architecture and supports more GitOps use cases. Microsoft launched a version of an extension using Flux v2 in May 2022. Since then, customers have been advised to move to Flux v2 within three years, as support for using Flux v1 is scheduled to end in May 2025. -More information about migration from Flux v1 to Flux v2 is available in the fluxcd project: [Migrate from Flux v1 to v2](https://fluxcd.io/docs/migration/). +Key new features introduced in the GitOps extension for Flux v2: +* Flux v1 is a monolithic do-it-all operator. Flux v2 separates the functionalities into [specialized controllers](#controllers) (Source controller, Kustomize controller, Helm controller, and Notification controller). +* Supports synchronization with multiple source repositories. +* Supports [multi-tenancy](#multi-tenancy), such as applying each source repository with its own set of permissions. +* Provides operational insights through health checks, events, and alerts. +* Supports Git branches, pinning on commits and tags, and following SemVer tag ranges. +* Credentials configuration per GitRepository resource: SSH private key, HTTP/S username/password/token, and OpenPGP public keys. ## Next steps |
azure-arc | Tutorial Use Gitops Flux2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-flux2.md | Title: "Tutorial: Deploy applications using GitOps with Flux v2" description: "This tutorial shows how to use GitOps with Flux v2 to manage configuration and application deployment in Azure Arc and AKS clusters." Previously updated : 12/19/2022 Last updated : 02/08/2023 az k8s-configuration flux create -g flux-demo-rg \ --scope cluster \ -u https://github.com/Azure/gitops-flux2-kustomize-helm-mt \ --branch main \kustomization name=infra path=./infrastructure prune=true \kustomization name=apps path=./apps/staging prune=true dependsOn=\["infra"\]+--kustomization-name=infra path=./infrastructure prune=true \ +--kustomization-name=apps path=./apps/staging prune=true dependsOn=\["infra"\] ``` The `microsoft.flux` extension will be installed on the cluster (if it hasn't already been installed due to a previous GitOps deployment). |
azure-arc | Deploy Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/deploy-cli.md | +- [AKS on HCI (AKS hybrid) - Arc resource bridge deployment](/azure/aks/hybrid/deploy-arc-resource-bridge-windows-server) This topic provides an overview of the [Azure CLI commands](/cli/azure/arcappliance) that are used to manage Arc resource bridge (preview) deployment, in the order in which they are typically used for deployment. |
azure-functions | Functions Bindings Triggers Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-triggers-python.md | Durable Functions also provides preview support of the V2 programming model. To > [!NOTE]-> Using [Extension Bundles](/azure-functions/functions-bindings-register#extension-bundles) is not currently supported when trying out the Python V2 programming model with Durable Functions, so you will need to manage your extensions manually. -> To do this, remove the `extensionBundles` section of your `host.json` as described [here](/azure-functions/functions-bindings-register#extension-bundles) and run `func extensions install --package Microsoft.Azure.WebJobs.Extensions.DurableTask --version 2.9.1` on your terminal. This will install the Durable Functions extension for your app and will allow you to try out the new experience. +> Using [Extension Bundles](./functions-bindings-register.md#extension-bundles) is not currently supported when trying out the Python V2 programming model with Durable Functions, so you will need to manage your extensions manually. +> To do this, remove the `extensionBundles` section of your `host.json` as described [here](./functions-run-local.md#install-extensions) and run `func extensions install --package Microsoft.Azure.WebJobs.Extensions.DurableTask --version 2.9.1` on your terminal. This will install the Durable Functions extension for your app and will allow you to try out the new experience. The Durable Functions Triggers and Bindings may be accessed from an instance `DFApp`, a subclass of `FunctionApp` that additionally exports Durable Functions-specific decorators. |
azure-monitor | Proactive Trace Severity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/proactive-trace-severity.md | Traces are widely used in applications, and they help tell the story of what hap It's normal to expect some level of ΓÇ£BadΓÇ¥ traces because of any number of reasons, such as transient network issues. But when a real problem begins growing, it usually manifests as an increase in the relative proportion of ΓÇ£badΓÇ¥ traces vs ΓÇ£goodΓÇ¥ traces. Smart detection automatically analyzes the trace telemetry that your application logs, and can warn you about unusual patterns in their severity. -This feature requires no special setup, other than configuring trace logging for your app. See how to configure a trace log listener for [.NET](../app/asp-net-trace-logs.md) or [Java](../app/java-in-process-agent.md). It's active when your app generates enough trace telemetry. +This feature requires no special setup, other than configuring trace logging for your app. See how to configure a trace log listener for [.NET](../app/asp-net-trace-logs.md) or [Java](../app/opentelemetry-enable.md?tabs=java). It's active when your app generates enough trace telemetry. ## When would I get this type of smart detection notification? You get this type of notification if the ratio between ΓÇ£goodΓÇ¥ traces (traces logged with a level of *Info* or *Verbose*) and ΓÇ£badΓÇ¥ traces (traces logged with a level of *Warning*, *Error*, or *Fatal*) is degrading in a specific day, compared to a baseline calculated over the previous seven days. |
azure-monitor | Api Custom Events Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-custom-events-metrics.md | If you don't have a reference on Application Insights SDK yet: * [ASP.NET project](./asp-net.md) * [ASP.NET Core project](./asp-net-core.md)- * [Java project](./java-in-process-agent.md) + * [Java project](./opentelemetry-enable.md?tabs=java) * [Node.js project](./nodejs.md) * [JavaScript in each webpage](./javascript.md) * In your device or web server code, include: catch (ex) The SDKs catch many exceptions automatically, so you don't always have to call `TrackException` explicitly: * **ASP.NET**: [Write code to catch exceptions](./asp-net-exceptions.md).-* **Java EE**: [Exceptions are caught automatically](./java-in-process-agent.md). +* **Java EE**: [Exceptions are caught automatically](./opentelemetry-enable.md?tabs=java). * **JavaScript**: Exceptions are caught automatically. If you want to disable automatic collection, add a line to the code snippet that you insert in your webpages: ```javascript Use `TrackTrace` to help diagnose problems by sending a "breadcrumb trail" to Ap In .NET [Log adapters](./asp-net-trace-logs.md), use this API to send third-party logs to the portal. -In Java, the [Application Insights Java agent](java-in-process-agent.md) autocollects and sends logs to the portal. +In Java, the [Application Insights Java agent](opentelemetry-enable.md?tabs=java) autocollects and sends logs to the portal. *C#* finally Remember that the server SDKs include a [dependency module](./asp-net-dependencies.md) that discovers and tracks certain dependency calls automatically, for example, to databases and REST APIs. You have to install an agent on your server to make the module work. In Java, many dependency calls can be automatically tracked by using the-[Application Insights Java agent](java-in-process-agent.md). +[Application Insights Java agent](opentelemetry-enable.md?tabs=java). 
You use this call if you want to track calls that the automated tracking doesn't catch. |
azure-monitor | App Insights Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md | A preview [Open Telemetry](opentelemetry-enable.md?tabs=net) offering is also available. Integrated Auto-Instrumentation is available for Java Apps hosted on [Azure App Service](azure-web-apps-java.md) and [Azure Functions](monitor-functions.md). -Auto-instrumentation is available for any environment using [Azure Monitor OpenTelemetry-based auto-instrumentation for Java applications](java-in-process-agent.md). +Auto-instrumentation is available for any environment using [Azure Monitor OpenTelemetry-based auto-instrumentation for Java applications](opentelemetry-enable.md?tabs=java). ### [Node.js](#tab/nodejs) A preview [Open Telemetry](opentelemetry-enable.md?tabs=python) offering is also available. This section outlines supported scenarios. * [C#|VB (.NET)](./asp-net.md)-* [Java](./java-in-process-agent.md) +* [Java](./opentelemetry-enable.md?tabs=java) * [JavaScript](./javascript.md) * [Node.js](./nodejs.md) * [Python](./opencensus-python.md) Supported platforms and frameworks are listed here. #### Auto-instrumentation (enable without code changes) * [ASP.NET - for web apps hosted with IIS](./status-monitor-v2-overview.md) * [ASP.NET Core - for web apps hosted with IIS](./status-monitor-v2-overview.md)-* [Java](./java-in-process-agent.md) +* [Java](./opentelemetry-enable.md?tabs=java) #### Manual instrumentation / SDK (some code changes required) * [ASP.NET](./asp-net.md) Supported platforms and frameworks are listed here. 
### Logging frameworks * [ILogger](./ilogger.md) * [Log4Net, NLog, or System.Diagnostics.Trace](./asp-net-trace-logs.md)-* [Log4J, Logback, or java.util.logging](./java-in-process-agent.md#autocollected-logs) +* [Log4J, Logback, or java.util.logging](./opentelemetry-enable.md?tabs=java#logs) * [LogStash plug-in](https://github.com/Azure/azure-diagnostics-tools/tree/master/Logstash/logstash-output-applicationinsights) * [Azure Monitor](/archive/blogs/msoms/application-insights-connector-in-oms) |
azure-monitor | Asp Net Dependencies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-dependencies.md | Below is the currently supported list of dependency calls that are automatically ### Java See the list of Application Insights Java's-[autocollected dependencies](java-in-process-agent.md#autocollected-dependencies). +[autocollected dependencies](opentelemetry-enable.md?tabs=java#distributed-tracing). ### Node.js A list of the latest [currently supported modules](https://github.com/microsoft/ * [Exceptions](./asp-net-exceptions.md) * [User and page data](./javascript.md) * [Availability](./monitor-web-app-availability.md)-* Set up custom dependency tracking for [Java](java-in-process-agent.md#add-spans-by-using-the-opentelemetry-annotation). +* Set up custom dependency tracking for [Java](opentelemetry-enable.md?tabs=java#add-custom-spans). * Set up custom dependency tracking for [OpenCensus Python](./opencensus-python-dependency.md). * [Write custom dependency telemetry](./api-custom-events-metrics.md#trackdependency) * See [data model](./data-model.md) for Application Insights types and data model. |
azure-monitor | Asp Net Exceptions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-exceptions.md | To have exceptions reported from your server-side application, consider the foll * Add the [Application Insights Extension](./azure-web-apps.md) for Azure web apps. * Add the [Application Monitoring Extension](./azure-vm-vmss-apps.md) for Azure Virtual Machines and Azure Virtual Machine Scale Sets IIS-hosted apps.- * Install [Application Insights SDK](./asp-net.md) in your app code, run [Application Insights Agent](./status-monitor-v2-overview.md) for IIS web servers, or enable the [Java agent](./java-in-process-agent.md) for Java web apps. + * Install [Application Insights SDK](./asp-net.md) in your app code, run [Application Insights Agent](./status-monitor-v2-overview.md) for IIS web servers, or enable the [Java agent](./opentelemetry-enable.md?tabs=java) for Java web apps. ### Client side |
azure-monitor | Asp Net Trace Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-trace-logs.md | System.Diagnostics.Tracing has an [Autoflush feature](/dotnet/api/system.diagnos ### How do I do this for Java? -In Java codeless instrumentation, which is recommended, the logs are collected out of the box. Use [Java 3.0 agent](./java-in-process-agent.md). +In Java codeless instrumentation, which is recommended, the logs are collected out of the box. Use [Java 3.0 agent](./opentelemetry-enable.md?tabs=java). The Application Insights Java agent collects logs from Log4j, Logback, and java.util.logging out of the box. |
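As the row above notes, the Java agent captures Log4j, Logback, and java.util.logging output with no Application Insights code in the app. A runnable sketch of the plain java.util.logging calls involved — the in-memory handler below only stands in for the agent so the snippet is self-contained; with the agent attached via `-javaagent`, no handler setup is needed at all:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

// With the Application Insights Java agent attached, ordinary
// java.util.logging calls like the ones in main() are forwarded to the
// portal automatically. The capturing handler is illustrative only.
class JulDemo {
    static final List<LogRecord> CAPTURED = new ArrayList<>();

    static Logger newLogger() {
        Logger logger = Logger.getAnonymousLogger();
        logger.setUseParentHandlers(false); // keep the console quiet; capture instead
        logger.setLevel(Level.ALL);
        logger.addHandler(new Handler() {
            @Override public void publish(LogRecord record) { CAPTURED.add(record); }
            @Override public void flush() {}
            @Override public void close() {}
        });
        return logger;
    }

    public static void main(String[] args) {
        Logger logger = newLogger();
        logger.info("Order processed");          // Info: a "good" trace
        logger.warning("Retrying payment call"); // Warning: a "bad" trace
        logger.log(Level.SEVERE, "Payment failed", new RuntimeException("timeout"));
        System.out.println(CAPTURED.size() + " records captured"); // prints: 3 records captured
    }
}
```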
azure-monitor | Azure Ad Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-ad-authentication.md | appInsights.defaultClient.config.aadTokenCredential = credential; > [!NOTE] > Support for Azure AD in the Application Insights Java agent is included starting with [Java 3.2.0-BETA](https://github.com/microsoft/ApplicationInsights-Java/releases/tag/3.2.0-BETA). -1. [Configure your application with the Java agent](java-in-process-agent.md#get-started). +1. [Configure your application with the Java agent.](opentelemetry-enable.md?tabs=java#get-started) > [!IMPORTANT] > Use the full connection string, which includes `IngestionEndpoint`, when you configure your app with the Java agent. For example, use `InstrumentationKey=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX;IngestionEndpoint=https://XXXX.applicationinsights.azure.com/`. |
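To illustrate why the full connection string matters in the row above, a minimal sketch that splits the documented example string into its parts. The agent performs this parsing itself; the helper here is purely illustrative:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A connection string is a semicolon-separated list of key=value pairs.
// When only InstrumentationKey is supplied, IngestionEndpoint is missing
// and the agent cannot target the right endpoint - hence the guidance to
// always use the full connection string.
class ConnectionString {
    static Map<String, String> parse(String cs) {
        Map<String, String> parts = new LinkedHashMap<>();
        for (String pair : cs.split(";")) {
            int eq = pair.indexOf('=');
            // Split on the first '=' only, so URL values keep their own '=' signs.
            if (eq > 0) parts.put(pair.substring(0, eq), pair.substring(eq + 1));
        }
        return parts;
    }

    public static void main(String[] args) {
        Map<String, String> p = parse(
            "InstrumentationKey=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX;IngestionEndpoint=https://XXXX.applicationinsights.azure.com/");
        System.out.println(p.get("IngestionEndpoint")); // prints: https://XXXX.applicationinsights.azure.com/
    }
}
```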
azure-monitor | Azure Vm Vmss Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-vm-vmss-apps.md | The Application Insights Agent autocollects the same dependency signals out of t ### [Java](#tab/Java) -We recommend the [Application Insights Java 3.0 agent](./java-in-process-agent.md) for Java. The most popular libraries, frameworks, logs, and dependencies are [autocollected](./java-in-process-agent.md#autocollected-requests), along with many [other configurations](./java-standalone-config.md). +We recommend the [Application Insights Java 3.0 agent](./opentelemetry-enable.md?tabs=java) for Java. The most popular libraries, frameworks, logs, and dependencies are [autocollected](./java-in-process-agent.md#autocollected-requests), along with many [other configurations](./java-standalone-config.md). ### [Node.js](#tab/nodejs) |
azure-monitor | Azure Web Apps Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-java.md | Monitoring of your Java web applications running on [Azure App Services](../../a The recommended way to enable application monitoring for Java applications running on Azure App Services is through Azure portal. Turning on application monitoring in Azure portal will automatically instrument your application with Application Insights, and doesn't require any code changes.-You can apply additional configurations, and then based on your specific scenario you [add your own custom telemetry](./java-in-process-agent.md#modify-telemetry) if needed. +You can apply additional configurations, and then based on your specific scenario you [add your own custom telemetry](./opentelemetry-enable.md?tabs=java#modify-telemetry) if needed. ### Auto-instrumentation through Azure portal -You can turn on monitoring for your Java apps running in Azure App Service just with one click, no code change required. The integration adds [Application Insights Java 3.x](./java-in-process-agent.md) and you will get the telemetry auto-collected. +You can turn on monitoring for your Java apps running in Azure App Service just with one click, no code change required. The integration adds [Application Insights Java 3.x](./opentelemetry-enable.md?tabs=java) and you will get the telemetry auto-collected. For a complete list of supported auto-instrumentation scenarios, see [Supported environments, languages, and resource providers](codeless-overview.md#supported-environments-languages-and-resource-providers). Below is our step-by-step troubleshooting guide for Java-based applications runn 1. Sometimes the latest version of the Application Insights Java agent is not available in App Service - it takes a couple of months for the latest versions to roll out to all regions. 
In case you need the latest version of Java agent to monitor your app in App Service, you can upload the agent manually: * Upload the Java agent jar file to App Service * Get the latest version of [Azure CLI](/cli/azure/install-azure-cli-windows?tabs=azure-cli)- * Get the latest version of [Application Insights Java agent](./java-in-process-agent.md) + * Get the latest version of [Application Insights Java agent](./opentelemetry-enable.md?tabs=java) * Deploy Java agent to App Service - a sample command to deploy the Java agent jar: `az webapp deploy --src-path applicationinsights-agent-{VERSION_NUMBER}.jar --target-path jav?tabs=javase&pivots=platform-linux#3configure-the-maven-plugin) to deploy through Maven plugin * Once the agent jar file is uploaded, go to App Service configurations and add a new environment variable, JAVA_OPTS, and set its value to `-javaagent:D:/home/{PATH_TO_THE_AGENT_JAR}/applicationinsights-agent-{VERSION_NUMBER}.jar` * Disable Application Insights via Application Insights tab |
azure-monitor | Azure Web Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps.md | There are two ways to enable monitoring for applications hosted on App Service: * **Manually instrumenting the application through code** by installing the Application Insights SDK. - This approach is much more customizable, but it requires the following approaches: SDK for [.NET Core](./asp-net-core.md), [.NET](./asp-net.md), [Node.js](./nodejs.md), [Python](./opencensus-python.md), and a standalone agent for [Java](./java-in-process-agent.md). This method also means you must manage the updates to the latest version of the packages yourself. + This approach is much more customizable, but it requires the following approaches: SDK for [.NET Core](./asp-net-core.md), [.NET](./asp-net.md), [Node.js](./nodejs.md), [Python](./opencensus-python.md), and a standalone agent for [Java](./opentelemetry-enable.md?tabs=java). This method also means you must manage the updates to the latest version of the packages yourself. If you need to make custom API calls to track events/dependencies not captured by default with auto-instrumentation monitoring, you'll need to use this method. To learn more, see [Application Insights API for custom events and metrics](./api-custom-events-metrics.md). |
azure-monitor | Codeless Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/codeless-overview.md | Links are provided to more information for each supported scenario. |Azure Functions - basic | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[1](#OnBD)</sup> | |Azure Functions - dependencies | :x: | :x: | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[2](#Preview)</sup> | :x: | [ :white_check_mark: :link: ](monitor-functions.md#distributed-tracing-for-python-function-apps) | |Azure Spring Cloud | :x: | :x: | [ :white_check_mark: :link: ](azure-web-apps-java.md) | :x: | :x: |-|Azure Kubernetes Service (AKS) | :x: | :x: | [ :white_check_mark: :link: ](java-in-process-agent.md) | :x: | :x: | -|Azure VMs Windows | [ :white_check_mark: :link: ](azure-vm-vmss-apps.md) <sup>[2](#Preview)</sup> <sup>[3](#Agent)</sup> | [ :white_check_mark: :link: ](azure-vm-vmss-apps.md) <sup>[2](#Preview)</sup> <sup>[3](#Agent)</sup> | [ :white_check_mark: :link: ](java-in-process-agent.md) | :x: | :x: | -|On-premises VMs Windows | [ :white_check_mark: :link: ](status-monitor-v2-overview.md) <sup>[3](#Agent)</sup> | [ :white_check_mark: :link: ](status-monitor-v2-overview.md) <sup>[2](#Preview)</sup> <sup>[3](#Agent)</sup> | [ :white_check_mark: :link: ](java-in-process-agent.md) | :x: | :x: | -|Standalone agent - any environment | :x: | :x: | [ :white_check_mark: :link: ](java-in-process-agent.md) | :x: | :x: | +|Azure Kubernetes Service (AKS) | :x: | :x: | [ :white_check_mark: :link: ](opentelemetry-enable.md?tabs=java) | :x: | :x: | +|Azure VMs Windows | [ :white_check_mark: :link: ](azure-vm-vmss-apps.md) 
<sup>[2](#Preview)</sup> <sup>[3](#Agent)</sup> | [ :white_check_mark: :link: ](azure-vm-vmss-apps.md) <sup>[2](#Preview)</sup> <sup>[3](#Agent)</sup> | [ :white_check_mark: :link: ](opentelemetry-enable.md?tabs=java) | :x: | :x: | +|On-premises VMs Windows | [ :white_check_mark: :link: ](status-monitor-v2-overview.md) <sup>[3](#Agent)</sup> | [ :white_check_mark: :link: ](status-monitor-v2-overview.md) <sup>[2](#Preview)</sup> <sup>[3](#Agent)</sup> | [ :white_check_mark: :link: ](opentelemetry-enable.md?tabs=java) | :x: | :x: | +|Standalone agent - any environment | :x: | :x: | [ :white_check_mark: :link: ](opentelemetry-enable.md?tabs=java) | :x: | :x: | **Footnotes** - <a name="OnBD">1</a>: Application Insights is on by default and enabled automatically. |
azure-monitor | Correlation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/correlation.md | The Application Insights .NET SDK uses `DiagnosticSource` and `Activity` to coll <a name="java-correlation"></a> ## Telemetry correlation in Java -[Java agent](./java-in-process-agent.md) supports automatic correlation of telemetry. It automatically populates `operation_id` for all telemetry (like traces, exceptions, and custom events) issued within the scope of a request. It also propagates the correlation headers that were described earlier for service-to-service calls via HTTP, if the [Java SDK agent](deprecated-java-2x.md#monitor-dependencies-caught-exceptions-and-method-execution-times-in-java-web-apps) is configured. +[Java agent](./opentelemetry-enable.md?tabs=java) supports automatic correlation of telemetry. It automatically populates `operation_id` for all telemetry (like traces, exceptions, and custom events) issued within the scope of a request. It also propagates the correlation headers that were described earlier for service-to-service calls via HTTP, if the [Java SDK agent](deprecated-java-2x.md#monitor-dependencies-caught-exceptions-and-method-execution-times-in-java-web-apps) is configured. > [!NOTE] > Application Insights Java agent autocollects requests and dependencies for JMS, Kafka, Netty/Webflux, and more. For Java SDK, only calls made via Apache HttpClient are supported for the correlation feature. Automatic context propagation across messaging technologies like Kafka, RabbitMQ, and Azure Service Bus isn't supported in the SDK. |
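The correlation headers mentioned in this row follow the W3C Trace Context `traceparent` format (`version-traceId-parentSpanId-traceFlags`). A hand-rolled parser for illustration only — the agent propagates and parses these headers automatically:

```java
// Sketch of the W3C Trace Context "traceparent" header the agent propagates,
// e.g. 00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01.
// Illustrative only: real services rely on the agent (or an OpenTelemetry
// propagator) rather than parsing the header by hand.
class TraceParent {
    final String version, traceId, parentSpanId, traceFlags;

    TraceParent(String version, String traceId, String parentSpanId, String traceFlags) {
        this.version = version;
        this.traceId = traceId;
        this.parentSpanId = parentSpanId;
        this.traceFlags = traceFlags;
    }

    /** Parses a traceparent header value, returning null if malformed. */
    static TraceParent parse(String header) {
        if (header == null) return null;
        String[] parts = header.split("-");
        if (parts.length != 4
                || parts[0].length() != 2    // version, e.g. "00"
                || parts[1].length() != 32   // 16-byte trace id as hex
                || parts[2].length() != 16   // 8-byte parent span id as hex
                || parts[3].length() != 2) { // flags, e.g. "01" = sampled
            return null;
        }
        return new TraceParent(parts[0], parts[1], parts[2], parts[3]);
    }

    public static void main(String[] args) {
        TraceParent tp = parse("00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01");
        System.out.println(tp.traceId); // prints: 0af7651916cd43dd8448eb211c80319c
    }
}
```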
azure-monitor | Create Workspace Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/create-workspace-resource.md | For information on how to set up an Application Insights SDK for code-based moni - [ASP.NET Core](./asp-net-core.md) - [Background tasks and modern console applications (.NET/.NET Core)](./worker-service.md) - [Classic console applications (.NET)](./console.md)-- [Java](./java-in-process-agent.md)+- [Java](./opentelemetry-enable.md?tabs=java) - [JavaScript](./javascript.md) - [Node.js](./nodejs.md) - [Python](./opencensus-python.md) |
azure-monitor | Data Model Dependency Telemetry | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-dependency-telemetry.md | Indication of successful or unsuccessful call. ## Next steps - Set up dependency tracking for [.NET](./asp-net-dependencies.md).-- Set up dependency tracking for [Java](./java-in-process-agent.md).+- Set up dependency tracking for [Java](./opentelemetry-enable.md?tabs=java). - [Write custom dependency telemetry](./api-custom-events-metrics.md#trackdependency) - See [data model](data-model.md) for Application Insights types and data model. - Check out [platforms](./app-insights-overview.md#supported-languages) supported by Application Insights. |
azure-monitor | Data Model Trace Telemetry | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-trace-telemetry.md | Trace severity level. ## Next steps - Explore [.NET trace logs in Application Insights](./asp-net-trace-logs.md).-- Explore [Java trace logs in Application Insights](./java-in-process-agent.md#autocollected-logs).+- Explore [Java trace logs in Application Insights](./opentelemetry-enable.md?tabs=java#logs). - See [data model](data-model.md) for Application Insights types and data model. - Write [custom trace telemetry](./api-custom-events-metrics.md#tracktrace). - Check out [platforms](./app-insights-overview.md#supported-languages) supported by Application Insights. |
azure-monitor | Data Retention Privacy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-retention-privacy.md | There are three sources of data: * Each SDK has many [modules](./configuration-with-applicationinsights-config.md), which use different techniques to collect different types of telemetry. * If you install the SDK in development, you can use its API to send your own telemetry, in addition to the standard modules. This custom telemetry can include any data you want to send.-* In some web servers, there are also agents that run alongside the app and send telemetry about CPU, memory, and network occupancy. For example, Azure VMs, Docker hosts, and [Java application servers](./java-in-process-agent.md) can have such agents. +* In some web servers, there are also agents that run alongside the app and send telemetry about CPU, memory, and network occupancy. For example, Azure VMs, Docker hosts, and [Java application servers](./opentelemetry-enable.md?tabs=java) can have such agents. * [Availability tests](./monitor-web-app-availability.md) are processes run by Microsoft that send requests to your web app at regular intervals. The results are sent to Application Insights. ### What kind of data is collected? This product includes GeoLite2 data created by [MaxMind](https://www.maxmind.com [client]: ./javascript.md [config]: ./configuration-with-applicationinsights-config.md [greenbrown]: ./asp-net.md-[java]: ./java-in-process-agent.md +[java]: ./opentelemetry-enable.md?tabs=java [platforms]: ./app-insights-overview.md#supported-languages [pricing]: https://azure.microsoft.com/pricing/details/application-insights/ [redfield]: ./status-monitor-v2-overview.md |
azure-monitor | Deprecated Java 2X | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/deprecated-java-2x.md | -> Documentation for the latest version can be found at [Application Insights Java 3.x](./java-in-process-agent.md). +> Documentation for the latest version can be found at [Application Insights Java 3.x](./opentelemetry-enable.md?tabs=java). In this article, you'll learn how to use Application Insights Java 2.x. This article shows you how to: If your project is already set up to use Gradle for build, merge the following c * `applicationinsights-core` gives you the bare API, for example, if your application isn't servlet-based. * How should I update the SDK to the latest version?- * As of November 2020, for monitoring Java applications, we recommend using Application Insights Java 3.x. For more information on how to get started, see [Application Insights Java 3.x](./java-in-process-agent.md). + * As of November 2020, for monitoring Java applications, we recommend using Application Insights Java 3.x. For more information on how to get started, see [Application Insights Java 3.x](./opentelemetry-enable.md?tabs=java). ### Add an ApplicationInsights.xml file Add *ApplicationInsights.xml* to the resources folder in your project, or make sure it's added to your project's deployment class path. Copy the following XML into it. Now publish your app to the server, let people use it, and watch the telemetry s ### Azure App Service, Azure Kubernetes Service, VMs config -The best and easiest approach to monitor your applications running on any Azure resource providers is to use [Application Insights Java 3.x](./java-in-process-agent.md). +The best and easiest approach to monitor your applications running on any Azure resource providers is to use [Application Insights Java 3.x](./opentelemetry-enable.md?tabs=java). 
### Exceptions and request failures Unhandled exceptions and request failures are automatically collected by the Application Insights web filter. Add the following binding code to the configuration file: [usage]: javascript.md [eclipse]: app-insights-java-eclipse.md [java]: #get-started-with-application-insights-in-a-java-web-project-[javaagent]: java-in-process-agent.md +[javaagent]: opentelemetry-enable.md?tabs=java |
azure-monitor | Devops | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/devops.md | Find out more about Application Insights: Getting started with Application Insights is easy. The main options are: * Use [IIS servers](./status-monitor-v2-overview.md).-* Instrument your project during development. You can do it for [ASP.NET](./asp-net.md) or [Java](./java-in-process-agent.md) apps, [Node.js](./nodejs.md), and a host of [other types](./app-insights-overview.md#supported-languages). +* Instrument your project during development. You can do it for [ASP.NET](./asp-net.md) or [Java](./opentelemetry-enable.md?tabs=java) apps, [Node.js](./nodejs.md), and a host of [other types](./app-insights-overview.md#supported-languages). * Instrument [any webpage](./javascript.md) by adding a short code snippet. |
azure-monitor | Diagnostic Search | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/diagnostic-search.md | The first time you do this step, you're asked to configure a link to your Azure In addition to the out-of-the-box telemetry sent by Application Insights SDK, you can: -* Capture log traces from your favorite logging framework in [.NET](./asp-net-trace-logs.md) or [Java](./java-in-process-agent.md#autocollected-logs). This means you can search through your log traces and correlate them with page views, exceptions, and other events. +* Capture log traces from your favorite logging framework in [.NET](./asp-net-trace-logs.md) or [Java](./opentelemetry-enable.md?tabs=java#logs). This means you can search through your log traces and correlate them with page views, exceptions, and other events. * [Write code](./api-custom-events-metrics.md) to send custom events, page views, and exceptions. |
azure-monitor | Distributed Tracing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/distributed-tracing.md | The Application Insights agents and SDKs for .NET, .NET Core, Java, Node.js, and * [.NET](asp-net.md) * [.NET Core](asp-net-core.md)-* [Java](./java-in-process-agent.md) +* [Java](./opentelemetry-enable.md?tabs=java) * [Node.js](../app/nodejs.md) * [JavaScript](./javascript.md#enable-distributed-tracing) * [Python](opencensus-python.md) A complete observability story includes all three pillars, but currently our [Az The following pages consist of language-by-language guidance to enable and configure Microsoft's OpenTelemetry-based offerings. Importantly, we share the available functionality and limitations of each offering so you can determine whether OpenTelemetry is right for your project. * [.NET](opentelemetry-enable.md?tabs=net)-* [Java](java-in-process-agent.md) +* [Java](opentelemetry-enable.md?tabs=java) * [Node.js](opentelemetry-enable.md?tabs=nodejs) * [Python](opentelemetry-enable.md?tabs=python) |
azure-monitor | Get Metric | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/get-metric.md | Throttling is a concern as it can lead to missed alerts. The condition to trigge In summary `GetMetric()` is the recommended approach since it does pre-aggregation, it accumulates values from all the Track() calls and sends a summary/aggregate once every minute. `GetMetric()` can significantly reduce the cost and performance overhead by sending fewer data points, while still collecting all relevant information. > [!NOTE]-> Only the .NET and .NET Core SDKs have a GetMetric() method. If you are using Java, see [sending custom metrics using micrometer](./java-in-process-agent.md#send-custom-metrics-by-using-micrometer). For JavaScript and Node.js you would still use `TrackMetric()`, but keep in mind the caveats that were outlined in the previous section. For Python you can use [OpenCensus.stats](./opencensus-python.md#metrics) to send custom metrics but the metrics implementation is different. +> Only the .NET and .NET Core SDKs have a GetMetric() method. If you are using Java, see [sending custom metrics using micrometer](./java-standalone-config.md#auto-collected-micrometer-metrics-including-spring-boot-actuator-metrics). For JavaScript and Node.js you would still use `TrackMetric()`, but keep in mind the caveats that were outlined in the previous section. For Python you can use [OpenCensus.stats](./opencensus-python.md#metrics) to send custom metrics but the metrics implementation is different. ## Getting started with GetMetric |
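The pre-aggregation idea behind `GetMetric()` can be sketched independently of any SDK. This is not the SDK's implementation — just the concept of accumulating a count/sum/min/max summary and emitting one item per interval instead of one item per measurement:

```java
// Conceptual sketch of pre-aggregation: many trackValue() calls accumulate
// locally, and a single summary leaves the process per flush interval.
// This is the idea behind GetMetric(), not the SDK's actual implementation.
class MetricAggregator {
    private long count;
    private double sum, min = Double.MAX_VALUE, max = -Double.MAX_VALUE;

    /** Record one measurement; nothing is sent yet. */
    synchronized void trackValue(double value) {
        count++;
        sum += value;
        min = Math.min(min, value);
        max = Math.max(max, value);
    }

    /** Emit one summary item for the interval, then reset. */
    synchronized String flush() {
        String summary = String.format("count=%d sum=%.1f min=%.1f max=%.1f",
                count, sum, min, max);
        count = 0; sum = 0; min = Double.MAX_VALUE; max = -Double.MAX_VALUE;
        return summary;
    }

    public static void main(String[] args) {
        MetricAggregator agg = new MetricAggregator();
        for (int i = 1; i <= 1000; i++) {
            agg.trackValue(i % 10);    // 1,000 measurements recorded...
        }
        System.out.println(agg.flush()); // ...but only one item leaves the process
    }
}
```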
azure-monitor | Java In Process Agent Redirect | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-in-process-agent-redirect.md | -For more information, see [Azure Monitor OpenTelemetry-based auto-instrumentation for Java applications](java-in-process-agent.md#azure-monitor-opentelemetry-based-auto-instrumentation-for-java-applications). +For more information, see [Azure Monitor OpenTelemetry-based auto-instrumentation for Java applications](opentelemetry-enable.md?tabs=java#enable-azure-monitor-opentelemetry-for-net-nodejs-python-and-java-applications). For a complete list of supported auto-instrumentation scenarios, see [Supported environments, languages, and resource providers](codeless-overview.md#supported-environments-languages-and-resource-providers). ## Next steps -- [Azure Monitor OpenTelemetry-based auto-instrumentation for Java applications](java-in-process-agent.md#azure-monitor-opentelemetry-based-auto-instrumentation-for-java-applications)+- [Azure Monitor OpenTelemetry-based auto-instrumentation for Java applications](opentelemetry-enable.md?tabs=java#enable-azure-monitor-opentelemetry-for-net-nodejs-python-and-java-applications) |
azure-monitor | Java In Process Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-in-process-agent.md | - Title: Azure Monitor Application Insights Java -description: Application performance monitoring for Java applications running in any environment without requiring code modification. The article also discusses distributed tracing and the application map. - Previously updated : 01/18/2023-----# Azure Monitor OpenTelemetry-based auto-instrumentation for Java applications --This article describes how to enable and configure the OpenTelemetry-based Azure Monitor Java offering. It can be used for any environment, including on-premises. After you finish the instructions in this article, you can use Azure Monitor Application Insights to monitor your application. ---## Get started --Java auto-instrumentation is enabled through configuration changes. No code changes are required. --### Prerequisites --You need: --- A Java application using Java 8+.-- An Azure subscription: [Create an Azure subscription for free](https://azure.microsoft.com/free/).-- An Application Insights resource: [Create an Application Insights resource](create-workspace-resource.md#create-a-workspace-based-resource).--### Enable Azure Monitor Application Insights --This section shows you how to download the auto-instrumentation jar file. --#### Download the jar file --Download the [applicationinsights-agent-3.4.9.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.4.9/applicationinsights-agent-3.4.9.jar) file. --> [!WARNING] -> -> If you are upgrading from an earlier 3.x version, you may be impacted by changing defaults or slight differences in the data we collect. 
See the migration notes at the top of the release notes for -> [3.4.0](https://github.com/microsoft/ApplicationInsights-Java/releases/tag/3.4.0), -> [3.3.0](https://github.com/microsoft/ApplicationInsights-Java/releases/tag/3.3.0), -> [3.2.0](https://github.com/microsoft/ApplicationInsights-Java/releases/tag/3.2.0), and -> [3.1.0](https://github.com/microsoft/ApplicationInsights-Java/releases/tag/3.1.0) -> for more details. --#### Point the JVM to the jar file --Add `-javaagent:"path/to/applicationinsights-agent-3.4.9.jar"` to your application's JVM args. --> [!TIP] -> For help with configuring your application's JVM args, see [Tips for updating your JVM args](./java-standalone-arguments.md). --If you develop a Spring Boot application, you can replace the JVM argument by a programmatic configuration. For more information, see [Using Azure Monitor Application Insights with Spring Boot](./java-spring-boot.md). --#### Set the Application Insights connection string --1. There are two ways you can point the jar file to your Application Insights resource: -- - Set an environment variable: - - ```console - APPLICATIONINSIGHTS_CONNECTION_STRING=<Copy connection string from Application Insights Resource Overview> - ``` -- - Create a configuration file named `applicationinsights.json`. Place it in the same directory as `applicationinsights-agent-3.4.9.jar` with the following content: -- ```json - { - "connectionString": "Copy connection string from Application Insights Resource Overview" - } - ``` --1. Find the connection string on your Application Insights resource. -- :::image type="content" source="media/migrate-from-instrumentation-keys-to-connection-strings/migrate-from-instrumentation-keys-to-connection-strings.png" alt-text="Screenshot that shows Application Insights overview and connection string." 
lightbox="media/migrate-from-instrumentation-keys-to-connection-strings/migrate-from-instrumentation-keys-to-connection-strings.png"::: --#### Confirm data is flowing --Run your application and open your **Application Insights Resource** tab in the Azure portal. It can take a few minutes for data to show up in the portal. --> [!NOTE] -> If you can't run the application or you aren't getting data as expected, see the [Troubleshooting](#troubleshooting) section. ---> [!IMPORTANT] -> If you have two or more services that emit telemetry to the same Application Insights resource, you're required to [set cloud role names](java-standalone-config.md#cloud-role-name) to represent them properly on the application map. --As part of using Application Insights instrumentation, we collect and send diagnostic data to Microsoft. This data helps us run and improve Application Insights. You can disable nonessential data collection. To learn more, see [Statsbeat in Azure Application Insights](./statsbeat.md). --## Configuration options --In the *applicationinsights.json* file, you can also configure these settings: --* Cloud role name -* Cloud role instance -* Sampling -* JMX metrics -* Custom dimensions -* Telemetry processors (preview) -* Autocollected logging -* Autocollected Micrometer metrics, including Spring Boot Actuator metrics -* Heartbeat -* HTTP proxy -* Self-diagnostics --For more information, see [Configuration options](./java-standalone-config.md). --## Auto-instrumentation --Java 3.x includes the following auto-instrumentation. --### Autocollected requests --* JMS consumers -* Kafka consumers -* Netty -* Quartz -* Servlets -* Spring scheduling --> [!NOTE] -> Servlet and Netty auto-instrumentation covers the majority of Java HTTP services, including Java EE, Jakarta EE, Spring Boot, Quarkus, and Micronaut. 
--### Autocollected dependencies --Autocollected dependencies plus downstream distributed trace propagation: --* Apache HttpClient -* Apache HttpAsyncClient -* AsyncHttpClient -* Google HttpClient -* gRPC -* java.net.HttpURLConnection -* Java 11 HttpClient -* JAX-RS client -* Jetty HttpClient -* JMS -* Kafka -* Netty client -* OkHttp --Autocollected dependencies without downstream distributed trace propagation: --* Cassandra -* JDBC -* MongoDB (async and sync) -* Redis (Lettuce and Jedis) --### Autocollected logs --* Logback (including MDC properties) -* Log4j (including MDC/Thread Context properties) -* JBoss Logging (including MDC properties) -* java.util.logging --### Autocollected metrics --* Micrometer, including Spring Boot Actuator metrics -* JMX Metrics --### Azure SDKs --Telemetry emitted by these Azure SDKs is automatically collected by default: --* [Azure App Configuration](/java/api/overview/azure/data-appconfiguration-readme) 1.1.10+ -* [Azure Cognitive Search](/java/api/overview/azure/search-documents-readme) 11.3.0+ -* [Azure Communication Chat](/java/api/overview/azure/communication-chat-readme) 1.0.0+ -* [Azure Communication Common](/java/api/overview/azure/communication-common-readme) 1.0.0+ -* [Azure Communication Identity](/java/api/overview/azure/communication-identity-readme) 1.0.0+ -* [Azure Communication Phone Numbers](/java/api/overview/azure/communication-phonenumbers-readme) 1.0.0+ -* [Azure Communication SMS](/java/api/overview/azure/communication-sms-readme) 1.0.0+ -* [Azure Cosmos DB](/java/api/overview/azure/cosmos-readme) 4.22.0+ -* [Azure Digital Twins - Core](/java/api/overview/azure/digitaltwins-core-readme) 1.1.0+ -* [Azure Event Grid](/java/api/overview/azure/messaging-eventgrid-readme) 4.0.0+ -* [Azure Event Hubs](/java/api/overview/azure/messaging-eventhubs-readme) 5.6.0+ -* [Azure Event Hubs - Azure Blob Storage Checkpoint Store](/java/api/overview/azure/messaging-eventhubs-checkpointstore-blob-readme) 1.5.1+ -* [Azure Form 
Recognizer](/java/api/overview/azure/ai-formrecognizer-readme) 3.0.6+ -* [Azure Identity](/java/api/overview/azure/identity-readme) 1.2.4+ -* [Azure Key Vault - Certificates](/java/api/overview/azure/security-keyvault-certificates-readme) 4.1.6+ -* [Azure Key Vault - Keys](/java/api/overview/azure/security-keyvault-keys-readme) 4.2.6+ -* [Azure Key Vault - Secrets](/java/api/overview/azure/security-keyvault-secrets-readme) 4.2.6+ -* [Azure Service Bus](/java/api/overview/azure/messaging-servicebus-readme) 7.1.0+ -* [Azure Storage - Blobs](/java/api/overview/azure/storage-blob-readme) 12.11.0+ -* [Azure Storage - Blobs Batch](/java/api/overview/azure/storage-blob-batch-readme) 12.9.0+ -* [Azure Storage - Blobs Cryptography](/java/api/overview/azure/storage-blob-cryptography-readme) 12.11.0+ -* [Azure Storage - Common](/java/api/overview/azure/storage-common-readme) 12.11.0+ -* [Azure Storage - Files Data Lake](/java/api/overview/azure/storage-file-datalake-readme) 12.5.0+ -* [Azure Storage - Files Shares](/java/api/overview/azure/storage-file-share-readme) 12.9.0+ -* [Azure Storage - Queues](/java/api/overview/azure/storage-queue-readme) 12.9.0+ -* [Azure Text Analytics](/java/api/overview/azure/ai-textanalytics-readme) 5.0.4+ --[//]: # "Azure Cosmos DB 4.22.0+ due to https://github.com/Azure/azure-sdk-for-java/pull/25571" --[//]: # "the remaining above names and links scraped from https://azure.github.io/azure-sdk/releases/latest/java.html" -[//]: # "and version synched manually against the oldest version in maven central built on azure-core 1.14.0" -[//]: # "" -[//]: # "var table = document.querySelector('#tg-sb-content > div > table')" -[//]: # "var str = ''" -[//]: # "for (var i = 1, row; row = table.rows[i]; i++) {" -[//]: # " var name = row.cells[0].getElementsByTagName('div')[0].textContent.trim()" -[//]: # " var stableRow = row.cells[1]" -[//]: # " var versionBadge = stableRow.querySelector('.badge')" -[//]: # " if (!versionBadge) {" -[//]: # " continue" 
-[//]: # " }" -[//]: # " var version = versionBadge.textContent.trim()" -[//]: # " var link = stableRow.querySelectorAll('a')[2].href" -[//]: # " str += '* [' + name + '](' + link + ') ' + version + '\n'" -[//]: # "}" -[//]: # "console.log(str)" --## Modify telemetry --This section explains how to modify telemetry. --### Add spans by using the OpenTelemetry annotation --The simplest way to add your own spans is by using OpenTelemetry's `@WithSpan` annotation. --Spans populate the `requests` and `dependencies` tables in Application Insights. --> [!NOTE] -> This feature is only in 3.2.0 and later. --1. Add `opentelemetry-instrumentation-annotations-1.21.0.jar` (or later) to your application: -- ```xml - <dependency> - <groupId>io.opentelemetry</groupId> - <artifactId>opentelemetry-instrumentation-annotations</artifactId> - <version>1.21.0</version> - </dependency> - ``` --1. Use the `@WithSpan` annotation to emit a span each time your method is executed: -- ```java - import io.opentelemetry.instrumentation.annotations.WithSpan; -- @WithSpan(value = "your span name") - public void yourMethod() { - } - ``` --By default, the span will end up in the `dependencies` table with dependency type `InProc`. --If your method represents a background job that isn't already captured by auto-instrumentation, -we recommend that you apply the attribute `kind = SpanKind.SERVER` to the `@WithSpan` annotation -so that it will end up in the Application Insights `requests` table. --### Add spans by using the OpenTelemetry API --If the preceding OpenTelemetry `@WithSpan` annotation doesn't meet your needs, you can add your spans by using the OpenTelemetry API. --> [!NOTE] -> This feature is only in 3.2.0 and later. --1. Add `opentelemetry-api-1.0.0.jar` (or later) to your application: -- ```xml - <dependency> - <groupId>io.opentelemetry</groupId> - <artifactId>opentelemetry-api</artifactId> - <version>1.0.0</version> - </dependency> - ``` --1. 
Use the `GlobalOpenTelemetry` class to create a `Tracer`: -- ```java - import io.opentelemetry.api.GlobalOpenTelemetry; - import io.opentelemetry.api.trace.Tracer; -- static final Tracer tracer = GlobalOpenTelemetry.getTracer("com.example"); - ``` --1. Create a span, make it current, and then end it: -- ```java - Span span = tracer.spanBuilder("my first span").startSpan(); - try (Scope ignored = span.makeCurrent()) { - // do stuff within the context of this - } catch (Throwable t) { - span.recordException(t); - } finally { - span.end(); - } - ``` --### Add span events --You can use `opentelemetry-api` to create span events, which populate the `traces` table in Application Insights. The string passed in to `addEvent()` is saved to the `message` field within the trace. --> [!NOTE] -> This feature is only in 3.2.0 and later. --1. Add `opentelemetry-api-1.0.0.jar` (or later) to your application: -- ```xml - <dependency> - <groupId>io.opentelemetry</groupId> - <artifactId>opentelemetry-api</artifactId> - <version>1.0.0</version> - </dependency> - ``` --1. Add span events in your code: -- ```java - import io.opentelemetry.api.trace.Span; -- Span.current().addEvent("eventName"); - ``` --### Add span attributes --You can use `opentelemetry-api` to add attributes to spans. These attributes can include adding a custom business dimension to your telemetry. You can also use attributes to set optional fields in the Application Insights schema, such as User ID or Client IP. --Adding one or more span attributes populates the `customDimensions` field in the `requests`, `dependencies`, `traces`, or `exceptions` table. --> [!NOTE] -> This feature is only in 3.2.0 and later. --1. Add `opentelemetry-api-1.0.0.jar` (or later) to your application: -- ```xml - <dependency> - <groupId>io.opentelemetry</groupId> - <artifactId>opentelemetry-api</artifactId> - <version>1.0.0</version> - </dependency> - ``` --1. 
Add custom dimensions in your code: -- ```java - import io.opentelemetry.api.trace.Span; - import io.opentelemetry.api.common.AttributeKey; -- AttributeKey<String> attributeKey = AttributeKey.stringKey("mycustomdimension"); - Span.current().setAttribute(attributeKey, "myvalue1"); - ``` --### Update span status and record exceptions --You can use `opentelemetry-api` to update the status of a span and record exceptions. --> [!NOTE] -> This feature is only in 3.2.0 and later. --1. Add `opentelemetry-api-1.0.0.jar` (or later) to your application: -- ```xml - <dependency> - <groupId>io.opentelemetry</groupId> - <artifactId>opentelemetry-api</artifactId> - <version>1.0.0</version> - </dependency> - ``` --1. Set status to `error` and record an exception in your code: -- ```java - import io.opentelemetry.api.trace.Span; - import io.opentelemetry.api.trace.StatusCode; -- Span span = Span.current(); - span.setStatus(StatusCode.ERROR, "errorMessage"); - span.recordException(e); - ``` --### Set the user ID --Populate the `user ID` field in the `requests`, `dependencies`, or `exceptions` table. --Consult applicable privacy laws before you set the Authenticated User ID. --> [!NOTE] -> This feature is only in 3.2.0 and later. --1. Add `opentelemetry-api-1.0.0.jar` (or later) to your application: -- ```xml - <dependency> - <groupId>io.opentelemetry</groupId> - <artifactId>opentelemetry-api</artifactId> - <version>1.0.0</version> - </dependency> - ``` --1. Set `user_Id` in your code: -- ```java - import io.opentelemetry.api.trace.Span; -- Span.current().setAttribute("enduser.id", "myuser"); - ``` --### Get the trace ID or span ID --You can use `opentelemetry-api` to get the trace ID or span ID, and then add these identifiers to existing logging telemetry to improve correlation when you debug and diagnose issues. --> [!NOTE] -> This feature is only in 3.2.0 and later. --1.
Add `opentelemetry-api-1.0.0.jar` (or later) to your application: -- ```xml - <dependency> - <groupId>io.opentelemetry</groupId> - <artifactId>opentelemetry-api</artifactId> - <version>1.0.0</version> - </dependency> - ``` --1. Get the request trace ID and the span ID in your code: -- ```java - import io.opentelemetry.api.trace.Span; -- Span span = Span.current(); - String traceId = span.getSpanContext().getTraceId(); - String spanId = span.getSpanContext().getSpanId(); - ``` --## Custom telemetry --Our goal in Application Insights Java 3.x is to allow you to send your custom telemetry by using standard APIs. --We currently support Micrometer, popular logging frameworks, and the Application Insights Java Classic SDK. Application Insights Java 3.x automatically captures the telemetry sent through these APIs and correlates it with autocollected telemetry. --### Supported custom telemetry --The following table shows the currently supported custom telemetry types that you can enable to supplement the Java 3.x agent. To summarize: --- Custom metrics are supported through Micrometer.-- Custom exceptions and traces are supported through logging frameworks.-- Custom requests, dependencies, metrics, and exceptions are supported through the OpenTelemetry API.-- The remaining telemetry types are supported through the [Application Insights Classic SDK](#send-custom-telemetry-by-using-the-application-insights-classic-sdk).--| Custom telemetry type | Micrometer | Logback, Log4j, JUL | OpenTelemetry API | Classic SDK | -|--|--|--|--|--| -| Custom events | | | | Yes | -| Custom metrics | Yes | | Yes | Yes | -| Dependencies | | | Yes | Yes | -| Exceptions | | Yes | Yes | Yes | -| Page views | | | | Yes | -| Requests | | | Yes | Yes | -| Traces | | Yes | | Yes | --### Send custom metrics by using Micrometer --1.
Add Micrometer to your application: - - ```xml - <dependency> - <groupId>io.micrometer</groupId> - <artifactId>micrometer-core</artifactId> - <version>1.6.1</version> - </dependency> - ``` --1. Use the Micrometer [global registry](https://micrometer.io/docs/concepts#_global_registry) to create a meter: -- ```java - static final Counter counter = Metrics.counter("test.counter"); - ``` --1. Use the counter to record metrics: -- ```java - counter.increment(); - ``` --1. The metrics will be ingested into the - [customMetrics](/azure/azure-monitor/reference/tables/custommetrics) table, with tags captured in the - `customDimensions` column. You can also view the metrics in the - [metrics explorer](../essentials/metrics-getting-started.md) under the `Log-based metrics` metric namespace. -- > [!NOTE] - > Application Insights Java replaces all non-alphanumeric characters (except dashes) in the Micrometer metric name with underscores. As a result, the preceding `test.counter` metric will show up as `test_counter`. --### Send custom traces and exceptions by using your favorite logging framework --Logback, Log4j, and java.util.logging are auto-instrumented. Logging performed via these logging frameworks is autocollected as trace and exception telemetry. --By default, logging is only collected when that logging is performed at the INFO level or higher. -To change this level, see the [configuration options](./java-standalone-config.md#auto-collected-logging). 
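As a minimal sketch of this autocollection with `java.util.logging` (the class name, logger name, and messages are illustrative assumptions), the INFO and WARNING records below would be captured as trace telemetry by the agent, while the FINE record falls below the default INFO collection threshold:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class OrderService {

    // The logger name is illustrative; the agent captures java.util.logging
    // output from any logger in the application.
    private static final Logger logger = Logger.getLogger("com.example.OrderService");

    public static void main(String[] args) {
        // Collected by default (INFO level and above).
        logger.info("Order received");

        // Collected by default; the parameter is resolved into the message.
        logger.log(Level.WARNING, "Inventory low for item {0}", "widget-42");

        // Skipped by default; FINE is below the default INFO threshold.
        logger.fine("Cache lookup detail");
    }
}
```

No Application Insights API is involved here; the agent picks the records up from the logging framework itself.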
--Structured logging (attaching custom dimensions to your logs) can be accomplished in these ways: -* [Logback MDC](http://logback.qos.ch/manual/mdc.html) -* [Log4j 2 MapMessage](https://logging.apache.org/log4j/2.x/log4j-api/apidocs/org/apache/logging/log4j/message/MapMessage.html) (a `MapMessage` key of `"message"` will be captured as the log message) -* [Log4j 2 Thread Context](https://logging.apache.org/log4j/2.x/manual/thread-context.html) -* [Log4j 1.2 MDC](https://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/MDC.html) --### Send custom telemetry by using the Application Insights Classic SDK --1. Add `applicationinsights-core` to your application: -- ```xml - <dependency> - <groupId>com.microsoft.azure</groupId> - <artifactId>applicationinsights-core</artifactId> - <version>3.4.9</version> - </dependency> - ``` --1. Create a `TelemetryClient` instance: - - ```java - static final TelemetryClient telemetryClient = new TelemetryClient(); - ``` --1. Use the client to send custom telemetry: -- ##### Events - - ```java - telemetryClient.trackEvent("WinGame"); - ``` - - ##### Metrics - - ```java - telemetryClient.trackMetric("queueLength", 42.0); - ``` - - ##### Dependencies - - ```java - boolean success = false; - long startTime = System.currentTimeMillis(); - try { - success = dependency.call(); - } finally { - long endTime = System.currentTimeMillis(); - RemoteDependencyTelemetry telemetry = new RemoteDependencyTelemetry(); - telemetry.setSuccess(success); - telemetry.setTimestamp(new Date(startTime)); - telemetry.setDuration(new Duration(endTime - startTime)); - telemetryClient.trackDependency(telemetry); - } - ``` - - ##### Logs - - ```java - telemetryClient.trackTrace(message, SeverityLevel.Warning, properties); - ``` - - ##### Exceptions - - ```java - try { - ... - } catch (Exception e) { - telemetryClient.trackException(e); - } - ``` --## Troubleshooting --See the dedicated [troubleshooting article](java-standalone-troubleshoot.md). 
---## Release notes --See the [release notes](https://github.com/microsoft/ApplicationInsights-Java/releases) on GitHub. --## Support --To get support: --- For help with troubleshooting, review the [troubleshooting steps](java-standalone-troubleshoot.md).-- For Azure support issues, open an [Azure support ticket](https://azure.microsoft.com/support/create-ticket/).-- For OpenTelemetry issues, contact the [OpenTelemetry community](https://opentelemetry.io/community/) directly.--## OpenTelemetry feedback --To provide feedback: --- Fill out the OpenTelemetry community's [customer feedback survey](https://docs.google.com/forms/d/e/1FAIpQLScUt4reClurLi60xyHwGozgM9ZAz8pNAfBHhbTZ4gFWaaXIRQ/viewform).-- Tell Microsoft about yourself by joining our [OpenTelemetry Early Adopter Community](https://aka.ms/AzMonOTel/).-- Engage with other Azure Monitor users in the [Microsoft Tech Community](https://techcommunity.microsoft.com/t5/azure-monitor/bd-p/AzureMonitor).-- Make a feature request at the [Azure Feedback Forum](https://feedback.azure.com/d365community/forum/8849e04d-1325-ec11-b6e6-000d3a4f09d0).--## Next steps --- Review [Java auto-instrumentation configuration options](java-standalone-config.md).-- To review the source code, see the [Azure Monitor Java auto-instrumentation GitHub repository](https://github.com/Microsoft/ApplicationInsights-Java).-- To learn more about OpenTelemetry and its community, see the [OpenTelemetry Java GitHub repository](https://github.com/open-telemetry/opentelemetry-java-instrumentation).-- To enable usage experiences, see [Enable web or browser user monitoring](javascript.md). |
azure-monitor | Java Standalone Config | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md | If your application uses [Micrometer](https://micrometer.io), metrics that are s Also, if your application uses [Spring Boot Actuator](https://docs.spring.io/spring-boot/docs/current/reference/html/production-ready-features.html), metrics configured by Spring Boot Actuator are auto-collected. +To send custom metrics by using Micrometer: ++1. Add Micrometer to your application: + + ```xml + <dependency> + <groupId>io.micrometer</groupId> + <artifactId>micrometer-core</artifactId> + <version>1.6.1</version> + </dependency> + ``` ++1. Use the Micrometer [global registry](https://micrometer.io/docs/concepts#_global_registry) to create a meter: ++ ```java + static final Counter counter = Metrics.counter("test.counter"); + ``` ++1. Use the counter to record metrics: ++ ```java + counter.increment(); + ``` ++1. The metrics will be ingested into the + [customMetrics](/azure/azure-monitor/reference/tables/custommetrics) table, with tags captured in the + `customDimensions` column. You can also view the metrics in the + [metrics explorer](../essentials/metrics-getting-started.md) under the `Log-based metrics` metric namespace. ++ > [!NOTE] + > Application Insights Java replaces all non-alphanumeric characters (except dashes) in the Micrometer metric name with underscores. As a result, the preceding `test.counter` metric will show up as `test_counter`. + To disable auto-collection of Micrometer metrics and Spring Boot Actuator metrics: > [!NOTE] |
azure-monitor | Javascript Click Analytics Plugin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-click-analytics-plugin.md | This plugin automatically tracks click events on web pages and uses data-* attri Users can set up the Click Analytics Auto-collection plugin via npm. -### NPM setup +### npm setup Install the npm package: const appInsights = new ApplicationInsights({ config: configObj }); appInsights.loadAppInsights(); ``` -## Snippet Setup (ignore if using NPM setup) +## Snippet setup (ignore if using npm setup) ```html <script type="text/javascript" src="https://js.monitor.azure.com/scripts/b/ext/ai.clck.2.6.2.min.js"></script> In JavaScript correlation is turned off by default in order to minimize the tele ## Next steps - Check out the [documentation on utilizing HEART Workbook](usage-heart.md) for expanded product analytics.-- Check out the [GitHub Repository](https://github.com/microsoft/ApplicationInsights-JS/tree/master/extensions/applicationinsights-clickanalytics-js) and [NPM Package](https://www.npmjs.com/package/@microsoft/applicationinsights-clickanalytics-js) for the Click Analytics Auto-Collection Plugin.+- Check out the [GitHub Repository](https://github.com/microsoft/ApplicationInsights-JS/tree/master/extensions/applicationinsights-clickanalytics-js) and [npm Package](https://www.npmjs.com/package/@microsoft/applicationinsights-clickanalytics-js) for the Click Analytics Auto-Collection Plugin. - Use [Events Analysis in Usage Experience](usage-segmentation.md) to analyze top clicks and slice by available dimensions. - Find click data under the content field within the customDimensions attribute in the CustomEvents table in [Log Analytics](../logs/log-analytics-tutorial.md#write-a-query).
For more information, see [Sample App](https://go.microsoft.com/fwlink/?linkid=2152871).-- Build a [Workbook](../visualize/workbooks-overview.md) or [export to Power BI](../logs/log-powerbi.md#integrate-queries) to create custom visualizations of click data.+- Build a [Workbook](../visualize/workbooks-overview.md) or [export to Power BI](../logs/log-powerbi.md) to create custom visualizations of click data. |
azure-monitor | Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript.md | -Application Insights can be used with any webpages by adding a short piece of JavaScript. Node.js has a [standalone SDK](nodejs.md). If your web service is [Java](java-in-process-agent.md) or [ASP.NET](asp-net.md), you can use the server-side SDKs with the client-side JavaScript SDK to get an end-to-end understanding of your app's performance. +Application Insights can be used with any webpage by adding a short piece of JavaScript. Node.js has a [standalone SDK](nodejs.md). If your web service is [Java](opentelemetry-enable.md?tabs=java) or [ASP.NET](asp-net.md), you can use the server-side SDKs with the client-side JavaScript SDK to get an end-to-end understanding of your app's performance. ## Add the JavaScript SDK |
azure-monitor | Kubernetes Codeless | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/kubernetes-codeless.md | -> Currently you can enable monitoring for your Java apps running on Kubernetes without instrumenting your code - use the [Java standalone agent](./java-in-process-agent.md). +> Currently you can enable monitoring for your Java apps running on Kubernetes without instrumenting your code: use the [Java standalone agent](./opentelemetry-enable.md?tabs=java). > While a solution for seamlessly enabling application monitoring is in the works for other languages, use the SDKs to monitor your apps running on AKS: [ASP.NET Core](./asp-net-core.md), [ASP.NET](./asp-net.md), [Node.js](./nodejs.md), [JavaScript](./javascript.md), and [Python](./opencensus-python.md). ## Application monitoring without instrumenting the code For a complete list of supported auto-instrumentation scenarios, see [Supported ## Java Once enabled, the Java agent will automatically collect a multitude of requests, dependencies, logs, and metrics from the most widely used libraries and frameworks. -Follow [the detailed instructions](./java-in-process-agent.md) to monitor your Java apps running in Kubernetes apps, as well as other environments. +Follow [the detailed instructions](./opentelemetry-enable.md?tabs=java) to monitor your Java apps running in Kubernetes, as well as in other environments. ## Other languages |
azure-monitor | Live Stream | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/live-stream.md | Live Metrics is currently supported for ASP.NET, ASP.NET Core, Azure Functions, * [ASP.NET Core](./asp-net-core.md): Live Metrics is enabled by default. * [.NET/.NET Core Console/Worker](./worker-service.md): Live Metrics is enabled by default. * [.NET Applications: Enable using code](#enable-live-metrics-by-using-code-for-any-net-application).- * [Java](./java-in-process-agent.md): Live Metrics is enabled by default. + * [Java](./opentelemetry-enable.md?tabs=java): Live Metrics is enabled by default. * [Node.js](./nodejs.md#live-metrics) 1. In the [Azure portal](https://portal.azure.com), open the Application Insights resource for your app. Then open Live Stream. |
azure-monitor | Monitor Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/monitor-functions.md | To collect custom telemetry from services such as Redis, Memcached, MongoDB, and * Read more about [monitoring Azure Functions](../../azure-functions/functions-monitoring.md) * Get an overview of [Distributed Tracing](./distributed-tracing.md) * See what [Application Map](./app-map.md?tabs=net) can do for your business-* Read about [requests and dependencies for Java apps](./java-in-process-agent.md) +* Read about [requests and dependencies for Java apps](./opentelemetry-enable.md?tabs=java) * Learn more about [Azure Monitor](../overview.md) and [Application Insights](./app-insights-overview.md) |
azure-monitor | Opentelemetry Enable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md | Title: Enable Azure Monitor OpenTelemetry for .NET, Node.js, and Python applications + Title: Enable Azure Monitor OpenTelemetry for .NET, Java, Node.js, and Python applications description: This article provides guidance on how to enable Azure Monitor on applications by using OpenTelemetry. Last updated 01/10/2023 ms.devlang: csharp, javascript, typescript, python -# Enable Azure Monitor OpenTelemetry for .NET, Node.js, and Python applications (preview) +# Enable Azure Monitor OpenTelemetry for .NET, Java, Node.js, and Python applications -The Azure Monitor OpenTelemetry Exporter is a component that sends traces, and metrics (and eventually all application telemetry) to Azure Monitor Application Insights. To learn more about OpenTelemetry concepts, see the [OpenTelemetry overview](opentelemetry-overview.md) or [OpenTelemetry FAQ](/azure/azure-monitor/faq#opentelemetry). +This article describes how to enable and configure OpenTelemetry-based data collection to power the experiences within [Azure Monitor Application Insights](app-insights-overview.md#application-insights-overview). To learn more about OpenTelemetry concepts, see the [OpenTelemetry overview](opentelemetry-overview.md) or [OpenTelemetry FAQ](/azure/azure-monitor/faq#opentelemetry). -This article describes how to enable and configure the OpenTelemetry-based Azure Monitor Preview offerings. After you finish the instructions in this article, you'll be able to send OpenTelemetry traces and metrics to Azure Monitor Application Insights. +## OpenTelemetry Release Status -> [!IMPORTANT] -> The Azure Monitor OpenTelemetry-based Offerings for .NET, Node.js, and Python applications are currently in preview.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. --## Limitations of the preview release --### [.NET](#tab/net) --Consider whether this preview is right for you. It *enables distributed tracing, metrics* and _excludes_: ---If you require a full-feature experience, use the existing Application Insights [ASP.NET](asp-net.md), or [ASP.NET Core](asp-net-core.md) SDK until the OpenTelemetry-based offering matures. --### [Node.js (JavaScript)](#tab/nodejs-javascript) --Consider whether this preview is right for you. It *enables distributed tracing, metrics* and _excludes_: --- [Live Metrics](live-stream.md)--If you require a full-feature experience, use the existing [Application Insights Node.js SDK](nodejs.md) until the OpenTelemetry-based offering matures. --> [!WARNING] -> At present, this exporter only works for Node.js environments. Use the [Application Insights JavaScript SDK](javascript.md) for web and browser scenarios. --### [Node.js (TypeScript)](#tab/nodejs-typescript) --Consider whether this preview is right for you. It *enables distributed tracing, metrics* and _excludes_: -+OpenTelemetry offerings are available for .NET, Java, Node.js, and Python applications. -If you require a full-feature experience, use the existing [Application Insights Node.js SDK](nodejs.md) until the OpenTelemetry-based offering matures. +|Language |Release Status | +|--|--| +|Java | :white_check_mark: <sup>[1](#GA)</sup> | +|.NET | :warning: <sup>[2](#PREVIEW)</sup> | +|Node.js | :warning: <sup>[2](#PREVIEW)</sup> | +|Python | :warning: <sup>[2](#PREVIEW)</sup> |
--### [Python](#tab/python) --Consider whether this preview is right for you. It *enables distributed tracing, metrics* and _excludes_: +**Footnotes** +- <a name="GA"> :white_check_mark: 1</a>: OpenTelemetry is available to all customers with formal support. +- <a name="PREVIEW"> :warning: 2</a>: OpenTelemetry is available as a public preview. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). --If you require a full-feature experience, use the existing [Application Insights Python-OpenCensus SDK](opencensus-python.md) until the OpenTelemetry-based offering matures. --+> [!NOTE] +> For a feature-by-feature release status, see the [FAQ](../faq.yml#what-is-the-current-release-state-of-features-within-each-opentelemetry-offering-). ## Get started Follow the steps in this section to instrument your application with OpenTelemet ### Prerequisites -- Azure subscription: [Create an Azure subscription for free](https://azure.microsoft.com/free/)-- Application Insights resource: [Create an Application Insights resource](create-workspace-resource.md#create-a-workspace-based-resource)+- An Azure subscription: [Create an Azure subscription for free](https://azure.microsoft.com/free/) +- An Application Insights resource: [Create an Application Insights resource](create-workspace-resource.md#create-a-workspace-based-resource) ++<!-- NOTE TO CONTRIBUTORS: PLEASE DO NOT SEPARATE OUT JAVASCRIPT AND TYPESCRIPT INTO DIFFERENT TABS. --> ### [.NET](#tab/net) - Application using an officially supported version of [.NET Core](https://dotnet.microsoft.com/download/dotnet) or [.NET Framework](https://dotnet.microsoft.com/download/dotnet-framework) that's at least .NET Framework 4.6.2 -### [Node.js (JavaScript)](#tab/nodejs-javascript) +### [Java](#tab/java)
version](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/monitor/monitor-opentelemetry-exporter#currently-supported-environments) of Node.js runtime:- - [OpenTelemetry supported runtimes](https://github.com/open-telemetry/opentelemetry-js#supported-runtimes) - - [Azure Monitor OpenTelemetry Exporter supported runtimes](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/monitor/monitor-opentelemetry-exporter#currently-supported-environments) +- A Java application using Java 8+ -### [Node.js (TypeScript)](#tab/nodejs-typescript) +### [Node.js](#tab/nodejs) - Application using an officially [supported version](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/monitor/monitor-opentelemetry-exporter#currently-supported-environments) of Node.js runtime: - [OpenTelemetry supported runtimes](https://github.com/open-telemetry/opentelemetry-js#supported-runtimes) If you get an error like "There are no versions available for the package Azure. dotnet add package --prerelease Azure.Monitor.OpenTelemetry.Exporter -s https://api.nuget.org/v3/index.json ``` -#### [Node.js (JavaScript)](#tab/nodejs-javascript) --Install these packages: --- [@opentelemetry/sdk-trace-base](https://www.npmjs.com/package/@opentelemetry/sdk-trace-base)-- [@opentelemetry/sdk-trace-node](https://www.npmjs.com/package/@opentelemetry/sdk-trace-node)-- [@azure/monitor-opentelemetry-exporter](https://www.npmjs.com/package/@azure/monitor-opentelemetry-exporter)-- [@opentelemetry/api](https://www.npmjs.com/package/@opentelemetry/api)--```sh -npm install @opentelemetry/sdk-trace-base -npm install @opentelemetry/sdk-trace-node -npm install @azure/monitor-opentelemetry-exporter -npm install @opentelemetry/api -``` --The following packages are also used for some specific scenarios described later in this article: +#### [Java](#tab/java) -- [@opentelemetry/sdk-metrics](https://www.npmjs.com/package/@opentelemetry/sdk-metrics)-- 
[@opentelemetry/resources](https://www.npmjs.com/package/@opentelemetry/resources)-- [@opentelemetry/semantic-conventions](https://www.npmjs.com/package/@opentelemetry/semantic-conventions)-- [@opentelemetry/instrumentation-http](https://www.npmjs.com/package/@opentelemetry/instrumentation-http)+Download the [applicationinsights-agent-3.4.8.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.4.8/applicationinsights-agent-3.4.8.jar) file. -```sh -npm install @opentelemetry/sdk-metrics -npm install @opentelemetry/resources -npm install @opentelemetry/semantic-conventions -npm install @opentelemetry/instrumentation-http -``` +> [!WARNING] +> +> If you are upgrading from an earlier 3.x version, you may be impacted by changing defaults or slight differences in the data we collect. See the migration notes at the top of the release notes for +> [3.4.0](https://github.com/microsoft/ApplicationInsights-Java/releases/tag/3.4.0), +> [3.3.0](https://github.com/microsoft/ApplicationInsights-Java/releases/tag/3.3.0), +> [3.2.0](https://github.com/microsoft/ApplicationInsights-Java/releases/tag/3.2.0), and +> [3.1.0](https://github.com/microsoft/ApplicationInsights-Java/releases/tag/3.1.0) +> for more details. -#### [Node.js (TypeScript)](#tab/nodejs-typescript) +#### [Node.js](#tab/nodejs) Install these packages: pip install azure-monitor-opentelemetry-exporter --pre This section provides guidance that shows how to enable OpenTelemetry. -#### Add OpenTelemetry instrumentation code +#### Instrument with OpenTelemetry ##### [.NET](#tab/net) public class Program > [!NOTE] > The `Activity` and `ActivitySource` classes from the `System.Diagnostics` namespace represent the OpenTelemetry concepts of `Span` and `Tracer`, respectively. You create `ActivitySource` directly by using its constructor instead of by using `TracerProvider`. 
Each [`ActivitySource`](https://github.com/open-telemetry/opentelemetry-dotnet/tree/main/docs/trace/customizing-the-sdk#activity-source) class must be explicitly connected to `TracerProvider` by using `AddSource()`. That's because parts of the OpenTelemetry tracing API are incorporated directly into the .NET runtime. To learn more, see [Introduction to OpenTelemetry .NET Tracing API](https://github.com/open-telemetry/opentelemetry-dotnet/blob/main/src/OpenTelemetry.Api/README.md#introduction-to-opentelemetry-net-tracing-api). -##### [Node.js (JavaScript)](#tab/nodejs-javascript) +##### [Java](#tab/java) ++Java auto-instrumentation is enabled through configuration changes; no code changes are required. ++Point the JVM to the jar file by adding `-javaagent:"path/to/applicationinsights-agent-3.4.8.jar"` to your application's JVM args. ++> [!TIP] +> For help with configuring your application's JVM args, see [Tips for updating your JVM args](./java-standalone-arguments.md). + +If you develop a Spring Boot application, you can optionally replace the JVM argument with programmatic configuration. For more information, see [Using Azure Monitor Application Insights with Spring Boot](./java-spring-boot.md). ++##### [Node.js](#tab/nodejs) The following code demonstrates how to enable OpenTelemetry in a simple JavaScript application: function doWork(parent) { ``` -##### [Node.js (TypeScript)](#tab/nodejs-typescript) --The following code demonstrates how to enable OpenTelemetry in a simple TypeScript application: --```typescript -import { AzureMonitorTraceExporter } from "@azure/monitor-opentelemetry-exporter"; -import { BatchSpanProcessor} from "@opentelemetry/sdk-trace-base"; -import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node"; -import { Context, context, Span, SpanOptions, trace, Tracer } from "@opentelemetry/api"; --const provider = new NodeTracerProvider(); -provider.register(); --// Create an exporter instance.
-const exporter = new AzureMonitorTraceExporter({ - connectionString: "<Your Connection String>" -}); --// Add the exporter to the provider. -provider.addSpanProcessor( - new BatchSpanProcessor(exporter) -); --// Create a tracer. -const tracer: Tracer = trace.getTracer("example-basic-tracer-node"); --// Create a span. A span must be closed. -const parentSpan: Span = tracer.startSpan("main"); --for (let i = 0; i < 10; i += 1) { - doWork(parentSpan); -} --// Be sure to end the span. -parentSpan.end(); --function doWork(parent: Span) { - // Start another span. In this example, the main method already started a - // span, so that will be the parent span, and this will be a child span. - const ctx: Context = trace.setSpan(context.active(), parent); -- // Set attributes to the span. - // Check the SpanOptions interface for more options that can be set into the span creation - const options: SpanOptions = { - attributes: { - "key": "value" - } - }; -- // Create a span and attach the span options and parent span context. - const span: Span = tracer.startSpan("doWork", options, ctx); -- // Simulate some random work. - for (let i = 0; i <= Math.floor(Math.random() * 40000000); i += 1) { - // empty - } -- // Annotate our span to capture metadata about our operation. - span.addEvent("invoking doWork"); -- // Mark the end of span execution. - span.end(); -} -``` - ##### [Python](#tab/python) The following code demonstrates how to enable OpenTelemetry in a simple Python application: with tracer.start_as_current_span("hello"): > [!TIP]-> Add [instrumentation libraries](#instrumentation-libraries) to autocollect telemetry across popular frameworks and libraries. +> For .NET, Node.js, and Python, you'll need to manually add [instrumentation libraries](#instrumentation-libraries) to autocollect telemetry across popular frameworks and libraries. For Java, these instrumentation libraries are already included and no additional steps are required. 
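The JavaScript, TypeScript, and Python snippets above all follow the same pattern: a span is started, becomes the active context, and any span started inside it records it as the parent. A minimal, library-free sketch of that bookkeeping (a toy stand-in for illustration only, not the OpenTelemetry SDK):

```python
from contextlib import contextmanager

class MiniTracer:
    """Toy tracer that records (span_name, parent_name) when spans end,
    to show how a child span picks up the currently active span as its parent."""
    def __init__(self):
        self.finished = []   # spans in the order they ended
        self._stack = []     # currently active span names

    @contextmanager
    def start_span(self, name):
        parent = self._stack[-1] if self._stack else None
        self._stack.append(name)
        try:
            yield name
        finally:
            self._stack.pop()
            self.finished.append((name, parent))

tracer = MiniTracer()
with tracer.start_span("main"):            # parent span, like the examples above
    for _ in range(2):
        with tracer.start_span("doWork"):  # child spans record "main" as parent
            pass  # simulated work
```

In the real OpenTelemetry SDKs, the active span travels through a context object (and across services via trace headers); this stand-in only models in-process nesting.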
#### Set the Application Insights connection string -Replace the placeholder `<Your Connection String>` in the preceding code with the connection string from *your* Application Insights resource. +You can find your connection string in the Overview Pane of your Application Insights Resource. :::image type="content" source="media/opentelemetry/connection-string.png" alt-text="Screenshot of the Application Insights connection string."::: +Here's how you set the connection string. ++#### [.NET](#tab/net) ++Replace the `<Your Connection String>` in the preceding code with the connection string from *your* Application Insights resource. ++#### [Java](#tab/java) ++Use one of the following two ways to point the jar file to your Application Insights resource: ++- Set an environment variable: + + ```console + APPLICATIONINSIGHTS_CONNECTION_STRING=<Your Connection String> + ``` + +- Create a configuration file named `applicationinsights.json`, and place it in the same directory as `applicationinsights-agent-3.4.8.jar` with the following content: + + ```json + { + "connectionString": "<Your Connection String>" + } + ``` ++#### [Node.js](#tab/nodejs) ++Replace the `<Your Connection String>` in the preceding code with the connection string from *your* Application Insights resource. ++#### [Python](#tab/python) ++Replace the `<Your Connection String>` in the preceding code with the connection string from *your* Application Insights resource. +++ #### Confirm data is flowing Run your application and open your **Application Insights Resource** tab in the Azure portal. It might take a few minutes for data to show up in the portal. Run your application and open your **Application Insights Resource** tab in the > [!IMPORTANT] > If you have two or more services that emit telemetry to the same Application Insights resource, you're required to [set Cloud Role Names](#set-the-cloud-role-name-and-the-cloud-role-instance) to represent them properly on the Application Map. 
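The connection string set above is a semicolon-delimited list of `Key=Value` pairs (for example, an `InstrumentationKey` and an `IngestionEndpoint`). As a rough illustration of what an exporter does with the value you paste in, here's a minimal parser sketch — the endpoint value is a made-up example, and the real SDKs perform more validation than this:

```python
def parse_connection_string(conn: str) -> dict:
    """Split a semicolon-delimited 'Key=Value' connection string into a dict.
    Splitting on the first '=' only keeps values that themselves contain '='
    (such as URLs with query strings) intact."""
    pairs = (item.split("=", 1) for item in conn.split(";") if item)
    return {key.strip(): value.strip() for key, value in pairs}

# Example with an illustrative (not real) ingestion endpoint:
settings = parse_connection_string(
    "InstrumentationKey=00000000-0000-0000-0000-000000000000;"
    "IngestionEndpoint=https://westus2-0.in.applicationinsights.azure.com/"
)
```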
-As part of using Application Insights instrumentation, we collect and send diagnostic data to Microsoft. This data helps us run and improve Application Insights. You may disable nonessential data collection. To learn more, see [Statsbeat in Azure Application Insights](./statsbeat.md). +As part of using Application Insights instrumentation, we collect and send diagnostic data to Microsoft. This data helps us run and improve Application Insights. To learn more, see [Statsbeat in Azure Application Insights](./statsbeat.md). ## Set the Cloud Role Name and the Cloud Role Instance -You might set the [Cloud Role Name](app-map.md#understand-the-cloud-role-name-within-the-context-of-an-application-map) and the Cloud Role Instance via [Resource](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/sdk.md#resource-sdk) attributes. This step updates Cloud Role Name and Cloud Role Instance from their default values to something that makes sense to your team. They'll appear on the Application Map as the name underneath a node. Cloud Role Name uses `service.namespace` and `service.name` attributes, although it falls back to `service.name` if `service.namespace` isn't set. Cloud Role Instance uses the `service.instance.id` attribute value. +You might want to update the [Cloud Role Name](app-map.md#understand-the-cloud-role-name-within-the-context-of-an-application-map) and the Cloud Role Instance from the default values to something that makes sense to your team. They'll appear on the Application Map as the name underneath a node. ### [.NET](#tab/net) +Set the Cloud Role Name and the Cloud Role Instance via [Resource](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/sdk.md#resource-sdk) attributes. Cloud Role Name uses `service.namespace` and `service.name` attributes, although it falls back to `service.name` if `service.namespace` isn't set. 
Cloud Role Instance uses the `service.instance.id` attribute value. For information on standard attributes for resources, see [Resource Semantic Conventions](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/semantic_conventions/README.md). + ```csharp // Setting role name and role instance var resourceAttributes = new Dictionary<string, object> { var tracerProvider = Sdk.CreateTracerProviderBuilder() .Build(); ``` -### [Node.js (JavaScript)](#tab/nodejs-javascript) +### [Java](#tab/java) ++To set the cloud role name, see [cloud role name](java-standalone-config.md#cloud-role-name). ++To set the cloud role instance, see [cloud role instance](java-standalone-config.md#cloud-role-instance). ++### [Node.js](#tab/nodejs) ++Set the Cloud Role Name and the Cloud Role Instance via [Resource](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/sdk.md#resource-sdk) attributes. Cloud Role Name uses `service.namespace` and `service.name` attributes, although it falls back to `service.name` if `service.namespace` isn't set. Cloud Role Instance uses the `service.instance.id` attribute value. For information on standard attributes for resources, see [Resource Semantic Conventions](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/semantic_conventions/README.md). ```javascript ... 
const meterProvider = new MeterProvider({ }); ``` -### [Node.js (TypeScript)](#tab/nodejs-typescript) --```typescript -import { Resource } from "@opentelemetry/resources"; -import { SemanticResourceAttributes } from "@opentelemetry/semantic-conventions"; -import { NodeTracerConfig, NodeTracerProvider } from "@opentelemetry/sdk-trace-node"; -import { MeterProvider, MeterProviderOptions } from "@opentelemetry/sdk-metrics"; --// - -// Setting role name and role instance -// - -const testResource = new Resource({ - [SemanticResourceAttributes.SERVICE_NAME]: "my-helloworld-service", - [SemanticResourceAttributes.SERVICE_NAMESPACE]: "my-namespace", - [SemanticResourceAttributes.SERVICE_INSTANCE_ID]: "my-instance", -}); --const tracerProviderConfig: NodeTracerConfig = { - resource: testResource -}; -const meterProviderConfig: MeterProviderOptions = { - resource: testResource -}; --// - -// Done setting role name and role instance -// - -const tracerProvider = new NodeTracerProvider(tracerProviderConfig); -const meterProvider = new MeterProvider(meterProviderConfig); -... -``` - ### [Python](#tab/python) +Set the Cloud Role Name and the Cloud Role Instance via [Resource](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/sdk.md#resource-sdk) attributes. Cloud Role Name uses `service.namespace` and `service.name` attributes, although it falls back to `service.name` if `service.namespace` isn't set. Cloud Role Instance uses the `service.instance.id` attribute value. For information on standard attributes for resources, see [Resource Semantic Conventions](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/semantic_conventions/README.md). + ```python ... 
from opentelemetry.sdk.resources import SERVICE_NAME, SERVICE_NAMESPACE, SERVICE_INSTANCE_ID, Resource trace.set_tracer_provider( -For information on standard attributes for resources, see [Resource Semantic Conventions](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/semantic_conventions/README.md). - ## Enable Sampling -You may want to enable sampling to reduce your data ingestion volume, which reduces your cost. Azure Monitor provides a custom *fixed-rate* sampler that populates events with a "sampling ratio", which Application Insights converts to "ItemCount". The *fixed-rate* sampler ensures accurate experiences and event counts. The sampler is designed to preserve your traces across services, and it's interoperable with older Application Insights SDKs. The sampler expects a sample rate of between 0 and 1 inclusive. A rate of 0.1 means approximately 10% of your traces will be sent. For more information, see [Learn More about sampling](sampling.md#brief-summary). +You may want to enable sampling to reduce your data ingestion volume, which reduces your cost. Azure Monitor provides a custom *fixed-rate* sampler that populates events with a "sampling ratio", which Application Insights converts to "ItemCount". The *fixed-rate* sampler ensures accurate experiences and event counts. The sampler is designed to preserve your traces across services, and it's interoperable with older Application Insights SDKs. For more information, see [Learn More about sampling](sampling.md#brief-summary). > [!NOTE] > Metrics are unaffected by sampling. #### [.NET](#tab/net) +The sampler expects a sample rate of between 0 and 1 inclusive. A rate of 0.1 means approximately 10% of your traces will be sent. + In this example, we utilize the `ApplicationInsightsSampler`, which offers compatibility with Application Insights SDKs. 
```dotnetcli var tracerProvider = Sdk.CreateTracerProviderBuilder() .Build(); ``` -#### [Node.js (JavaScript)](#tab/nodejs-javascript) +#### [Java](#tab/java) ++Starting from 3.4.0, rate-limited sampling is available and is now the default. See [sampling]( java-standalone-config.md#sampling) for more information. ++#### [Node.js](#tab/nodejs) ```javascript const { BasicTracerProvider, SimpleSpanProcessor } = require("@opentelemetry/sdk-trace-base"); provider.addSpanProcessor(new SimpleSpanProcessor(exporter)); provider.register(); ``` -#### [Node.js (TypeScript)](#tab/nodejs-typescript) --```typescript -import { BasicTracerProvider, SimpleSpanProcessor } from "@opentelemetry/sdk-trace-base"; -import { ApplicationInsightsSampler, AzureMonitorTraceExporter } from "@azure/monitor-opentelemetry-exporter"; --// Sampler expects a sample rate of between 0 and 1 inclusive -// A rate of 0.1 means approximately 10% of your traces are sent -const aiSampler = new ApplicationInsightsSampler(0.75); -const provider = new BasicTracerProvider({ - sampler: aiSampler -}); -const exporter = new AzureMonitorTraceExporter({ - connectionString: "<Your Connection String>" -}); -provider.addSpanProcessor(new SimpleSpanProcessor(exporter)); -provider.register(); -``` - #### [Python](#tab/python) In this example, we utilize the `ApplicationInsightsSampler`, which offers compatibility with Application Insights SDKs. for i in range(100): > [!TIP]-> If you're not sure where to set the sampling rate, start at 5% (i.e., 0.05 sampling ratio) and adjust the rate based on the accuracy of the operations shown in the failures and performance panes. A higher rate generally results in higher accuracy. However, ANY sampling will affect accuracy so we recommend alerting on [OpenTelemetry metrics](#metrics), which are unaffected by sampling. 
+> When using fixed-rate/percentage sampling and you aren't sure what to set the sampling rate to, start at 5% (i.e., 0.05 sampling ratio) and adjust the rate based on the accuracy of the operations shown in the failures and performance blades. A higher rate generally results in higher accuracy. However, ANY sampling will affect accuracy, so we recommend alerting on [OpenTelemetry metrics](#metrics), which are unaffected by sampling. ## Instrumentation libraries -The following libraries are validated to work with the preview release. +The following libraries are validated to work with the current release. > [!WARNING]-> Instrumentation libraries are based on experimental OpenTelemetry specifications. Microsoft's *preview* support commitment is to ensure that the following libraries emit data to Azure Monitor Application Insights, but it's possible that breaking changes or experimental mapping will block some data elements. +> Instrumentation libraries are based on experimental OpenTelemetry specifications, which impacts languages in [preview status](#opentelemetry-release-status). Microsoft's *preview* support commitment is to ensure that the following libraries emit data to Azure Monitor Application Insights, but it's possible that breaking changes or experimental mapping will block some data elements.
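The fixed-rate sampler described in the Enable Sampling section above makes a single keep/drop decision per trace so that spans stay consistent across services. A minimal, library-free sketch of that idea, using a hypothetical hash-based decision (illustrative only — not the actual `ApplicationInsightsSampler` algorithm):

```python
import hashlib

def should_sample(trace_id: str, ratio: float) -> bool:
    """Deterministic keep/drop decision: hash the trace ID into [0, 1)
    and keep the trace when the hash falls below the configured ratio.
    Hashing the trace ID (rather than rolling a die per span) keeps the
    decision consistent for every span belonging to the same trace."""
    digest = hashlib.sha256(trace_id.encode("utf-8")).digest()
    score = int.from_bytes(digest[:8], "big") / 2**64
    return score < ratio

# With a ratio of 0.1, roughly 10% of trace IDs are kept.
kept = sum(should_sample(f"trace-{i:032x}", 0.1) for i in range(10_000))
```

Because the decision depends only on the trace ID, every service that sees the same trace makes the same choice, which is what preserves end-to-end traces under sampling.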
### Distributed Tracing #### [.NET](#tab/net) Requests-- [ASP.NET](https://github.com/open-telemetry/opentelemetry-dotnet-contrib/blob/Instrumentation.AspNet-1.0.0-rc9.6/src/OpenTelemetry.Instrumentation.AspNet/README.md) (1) version:+- [ASP.NET](https://github.com/open-telemetry/opentelemetry-dotnet-contrib/blob/Instrumentation.AspNet-1.0.0-rc9.6/src/OpenTelemetry.Instrumentation.AspNet/README.md) <sup>[1](#FOOTNOTEONE)</sup> version: [1.0.0-rc9.6](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.AspNet/1.0.0-rc9.6) - [ASP.NET- Core](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.7/src/OpenTelemetry.Instrumentation.AspNetCore/README.md) (1) version: + Core](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.7/src/OpenTelemetry.Instrumentation.AspNetCore/README.md) <sup>[1](#FOOTNOTEONE)</sup> version: [1.0.0-rc9.7](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.AspNetCore/1.0.0-rc9.7) Dependencies - [HTTP- clients](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.7/src/OpenTelemetry.Instrumentation.Http/README.md) (1) version: + clients](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.7/src/OpenTelemetry.Instrumentation.Http/README.md) <sup>[1](#FOOTNOTEONE)</sup> version: [1.0.0-rc9.7](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.Http/1.0.0-rc9.7) - [SQL- client](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.7/src/OpenTelemetry.Instrumentation.SqlClient/README.md) (1) version: + client](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.7/src/OpenTelemetry.Instrumentation.SqlClient/README.md) <sup>[1](#FOOTNOTEONE)</sup> version: [1.0.0-rc9.7](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.SqlClient/1.0.0-rc9.7) -#### [Node.js (JavaScript)](#tab/nodejs-javascript) --Requests/Dependencies -- 
[http/https](https://github.com/open-telemetry/opentelemetry-js/tree/main/experimental/packages/opentelemetry-instrumentation-http/README.md) version:- [0.33.0](https://www.npmjs.com/package/@opentelemetry/instrumentation-http/v/0.33.0) - -Dependencies -- [mysql](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/plugins/node/opentelemetry-instrumentation-mysql) version:- [0.25.0](https://www.npmjs.com/package/@opentelemetry/instrumentation-mysql/v/0.25.0) --#### [Node.js (TypeScript)](#tab/nodejs-typescript) +#### [Java](#tab/java) ++Java 3.x includes the following auto-instrumentation. ++Autocollected requests: ++* JMS consumers +* Kafka consumers +* Netty +* Quartz +* Servlets +* Spring scheduling ++ > [!NOTE] + > Servlet and Netty auto-instrumentation covers the majority of Java HTTP services, including Java EE, Jakarta EE, Spring Boot, Quarkus, and Micronaut. ++Autocollected dependencies (plus downstream distributed trace propagation): ++* Apache HttpClient +* Apache HttpAsyncClient +* AsyncHttpClient +* Google HttpClient +* gRPC +* java.net.HttpURLConnection +* Java 11 HttpClient +* JAX-RS client +* Jetty HttpClient +* JMS +* Kafka +* Netty client +* OkHttp ++Autocollected dependencies (without downstream distributed trace propagation): ++* Cassandra +* JDBC +* MongoDB (async and sync) +* Redis (Lettuce and Jedis) ++Telemetry emitted by these Azure SDKs is automatically collected by default: ++* [Azure App Configuration](/java/api/overview/azure/data-appconfiguration-readme) 1.1.10+ +* [Azure Cognitive Search](/java/api/overview/azure/search-documents-readme) 11.3.0+ +* [Azure Communication Chat](/java/api/overview/azure/communication-chat-readme) 1.0.0+ +* [Azure Communication Common](/java/api/overview/azure/communication-common-readme) 1.0.0+ +* [Azure Communication Identity](/java/api/overview/azure/communication-identity-readme) 1.0.0+ +* [Azure Communication Phone Numbers](/java/api/overview/azure/communication-phonenumbers-readme) 
1.0.0+ +* [Azure Communication SMS](/java/api/overview/azure/communication-sms-readme) 1.0.0+ +* [Azure Cosmos DB](/java/api/overview/azure/cosmos-readme) 4.22.0+ +* [Azure Digital Twins - Core](/java/api/overview/azure/digitaltwins-core-readme) 1.1.0+ +* [Azure Event Grid](/java/api/overview/azure/messaging-eventgrid-readme) 4.0.0+ +* [Azure Event Hubs](/java/api/overview/azure/messaging-eventhubs-readme) 5.6.0+ +* [Azure Event Hubs - Azure Blob Storage Checkpoint Store](/java/api/overview/azure/messaging-eventhubs-checkpointstore-blob-readme) 1.5.1+ +* [Azure Form Recognizer](/java/api/overview/azure/ai-formrecognizer-readme) 3.0.6+ +* [Azure Identity](/java/api/overview/azure/identity-readme) 1.2.4+ +* [Azure Key Vault - Certificates](/java/api/overview/azure/security-keyvault-certificates-readme) 4.1.6+ +* [Azure Key Vault - Keys](/java/api/overview/azure/security-keyvault-keys-readme) 4.2.6+ +* [Azure Key Vault - Secrets](/java/api/overview/azure/security-keyvault-secrets-readme) 4.2.6+ +* [Azure Service Bus](/java/api/overview/azure/messaging-servicebus-readme) 7.1.0+ +* [Azure Storage - Blobs](/java/api/overview/azure/storage-blob-readme) 12.11.0+ +* [Azure Storage - Blobs Batch](/java/api/overview/azure/storage-blob-batch-readme) 12.9.0+ +* [Azure Storage - Blobs Cryptography](/java/api/overview/azure/storage-blob-cryptography-readme) 12.11.0+ +* [Azure Storage - Common](/java/api/overview/azure/storage-common-readme) 12.11.0+ +* [Azure Storage - Files Data Lake](/java/api/overview/azure/storage-file-datalake-readme) 12.5.0+ +* [Azure Storage - Files Shares](/java/api/overview/azure/storage-file-share-readme) 12.9.0+ +* [Azure Storage - Queues](/java/api/overview/azure/storage-queue-readme) 12.9.0+ +* [Azure Text Analytics](/java/api/overview/azure/ai-textanalytics-readme) 5.0.4+ ++[//]: # "Azure Cosmos DB 4.22.0+ due to https://github.com/Azure/azure-sdk-for-java/pull/25571" ++[//]: # "the remaining above names and links scraped from 
https://azure.github.io/azure-sdk/releases/latest/java.html" +[//]: # "and version synched manually against the oldest version in maven central built on azure-core 1.14.0" +[//]: # "" +[//]: # "var table = document.querySelector('#tg-sb-content > div > table')" +[//]: # "var str = ''" +[//]: # "for (var i = 1, row; row = table.rows[i]; i++) {" +[//]: # " var name = row.cells[0].getElementsByTagName('div')[0].textContent.trim()" +[//]: # " var stableRow = row.cells[1]" +[//]: # " var versionBadge = stableRow.querySelector('.badge')" +[//]: # " if (!versionBadge) {" +[//]: # " continue" +[//]: # " }" +[//]: # " var version = versionBadge.textContent.trim()" +[//]: # " var link = stableRow.querySelectorAll('a')[2].href" +[//]: # " str += '* [' + name + '](' + link + ') ' + version + '\n'" +[//]: # "}" +[//]: # "console.log(str)" ++#### [Node.js](#tab/nodejs) Requests/Dependencies - [http/https](https://github.com/open-telemetry/opentelemetry-js/tree/main/experimental/packages/opentelemetry-instrumentation-http/README.md) version: Dependencies #### [Python](#tab/python) Requests-- [Django](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-django) (1) version:+- [Django](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-django) <sup>[1](#FOOTNOTEONE)</sup> version: [0.34b0](https://pypi.org/project/opentelemetry-instrumentation-django/0.34b0/)-- [Flask](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-flask) (1) version:+- [Flask](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-flask) <sup>[1](#FOOTNOTEONE)</sup> version: [0.34b0](https://pypi.org/project/opentelemetry-instrumentation-flask/0.34b0/) Dependencies - 
[Psycopg2](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-psycopg2) version: [0.34b0](https://pypi.org/project/opentelemetry-instrumentation-psycopg2/0.34b0/)-- [Requests](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-requests) (1) version:+- [Requests](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-requests) <sup>[1](#FOOTNOTEONE)</sup> version: [0.34b0](https://pypi.org/project/opentelemetry-instrumentation-requests/0.34b0/) -(1) Supports automatic reporting (as SpanEvent) of unhandled exceptions -- ### Metrics #### [.NET](#tab/net) Dependencies [1.0.0-rc9.7](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.Http/1.0.0-rc9.7) - [Runtime](https://github.com/open-telemetry/opentelemetry-dotnet-contrib/blob/Instrumentation.Runtime-1.0.0/src/OpenTelemetry.Instrumentation.Runtime/README.md) version: [1.0.0](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.Runtime/1.0.0) -#### [Node.js (JavaScript)](#tab/nodejs-javascript) +#### [Java](#tab/java) -- [http/https](https://github.com/open-telemetry/opentelemetry-js/tree/main/experimental/packages/opentelemetry-instrumentation-http/README.md) version:- [0.33.0](https://www.npmjs.com/package/@opentelemetry/instrumentation-http/v/0.33.0) +Autocollected metrics ++* Micrometer, including Spring Boot Actuator metrics +* JMX Metrics -#### [Node.js (TypeScript)](#tab/nodejs-typescript) +#### [Node.js](#tab/nodejs) - [http/https](https://github.com/open-telemetry/opentelemetry-js/tree/main/experimental/packages/opentelemetry-instrumentation-http/README.md) version: [0.33.0](https://www.npmjs.com/package/@opentelemetry/instrumentation-http/v/0.33.0) Dependencies > [!TIP]-> The OpenTelemetry-based offerings currently emit all metrics as [Custom Metrics](#add-custom-metrics) in Metrics Explorer. 
Whatever you set as the meter name becomes the metrics namespace. --## Modify telemetry +> The OpenTelemetry-based offerings currently emit all metrics as [Custom Metrics](#add-custom-metrics) and [Performance Counters](standard-metrics.md#performance-counters) in Metrics Explorer. For .NET, Node.js, and Python, whatever you set as the meter name becomes the metrics namespace. -This section explains how to modify telemetry. +### Logs -### Add span attributes +#### [.NET](#tab/net) -To add span attributes, use either of the following two ways: +Coming soon. -* Use options provided by [instrumentation libraries](#instrumentation-libraries). -* Add a custom span processor. +#### [Java](#tab/java) -These attributes might include adding a custom property to your telemetry. You might also use attributes to set optional fields in the Application Insights schema, like Client IP. +Autocollected logs -> [!TIP] -> The advantage of using options provided by instrumentation libraries, when they're available, is that the entire context is available. As a result, users can select to add or filter more attributes. For example, the enrich option in the HttpClient instrumentation library gives users access to the httpRequestMessage itself. They can select anything from it and store it as an attribute. +* Logback <sup>[1](#FOOTNOTEONE)</sup> <sup>[2](#FOOTNOTETWO)</sup> (including MDC properties) +* Log4j <sup>[1](#FOOTNOTEONE)</sup> <sup>[2](#FOOTNOTETWO)</sup> (including MDC/Thread Context properties) +* JBoss Logging <sup>[1](#FOOTNOTEONE)</sup> <sup>[2](#FOOTNOTETWO)</sup> (including MDC properties) +* java.util.logging <sup>[1](#FOOTNOTEONE)</sup> <sup>[2](#FOOTNOTETWO)</sup> -#### Add a custom property to a Trace +#### [Node.js](#tab/nodejs) -Any [attributes](#add-span-attributes) you add to spans are exported as custom properties. They populate the _customDimensions_ field in the requests or the dependencies tables in Application Insights. +Coming soon. 
-##### [.NET](#tab/net) +#### [Python](#tab/python) -1. Many instrumentation libraries provide an enrich option. For guidance, see the readme files of individual instrumentation libraries: - - [ASP.NET](https://github.com/open-telemetry/opentelemetry-dotnet-contrib/blob/Instrumentation.AspNet-1.0.0-rc9.6/src/OpenTelemetry.Instrumentation.AspNet/README.md#enrich) - - [ASP.NET Core](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.7/src/OpenTelemetry.Instrumentation.AspNetCore/README.md#enrich) - - [HttpClient and HttpWebRequest](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.7/src/OpenTelemetry.Instrumentation.Http/README.md#enrich) +Coming soon. -1. Use a custom processor: + -> [!TIP] -> Add the processor shown here *before* the Azure Monitor Exporter. +**Footnotes** +- <a name="FOOTNOTEONE">1</a>: Supports automatic reporting of unhandled exceptions +- <a name="FOOTNOTETWO">2</a>: By default, logging is only collected when that logging is performed at the INFO level or higher. To change this level, see the [configuration options](./java-standalone-config.md#auto-collected-logging). -```csharp -using var tracerProvider = Sdk.CreateTracerProviderBuilder() - .AddSource("OTel.AzureMonitor.Demo") - .AddProcessor(new ActivityEnrichingProcessor()) - .AddAzureMonitorTraceExporter(o => - { - o.ConnectionString = "<Your Connection String>" - }) - .Build(); -``` +## Collect custom telemetry -Add `ActivityEnrichingProcessor.cs` to your project with the following code: +This section explains how to collect custom telemetry from your application. 
+ +Depending on your language and signal type, there are different ways to collect custom telemetry, including: + +- OpenTelemetry API +- Language-specific logging/metrics libraries +- Application Insights Classic API + +The following table represents the currently supported custom telemetry types: ++| | Custom Events | Custom Metrics | Dependencies | Exceptions | Page Views | Requests | Traces | +|-||-|--|||-|--| +| **.NET** | | | | | | | | +| OpenTelemetry API | | | Yes | Yes | | Yes | | +| iLogger API | | | | | | | Yes | +| AI Classic API | | | | | | | | +| | | | | | | | | +| **Java** | | | | | | | | +| OpenTelemetry API | | Yes | Yes | Yes | | Yes | | +| Logback, Log4j, JUL | | | | Yes | | | Yes | +| Micrometer | | Yes | | | | | | +| AI Classic API | Yes | Yes | Yes | Yes | Yes | Yes | Yes | +| | | | | | | | | +| **Node.js** | | | | | | | | +| OpenTelemetry API | | Yes | Yes | Yes | | Yes | | +| Winston, Pino, Bunyan | | | | | | | Yes | +| AI Classic API | Yes | Yes | Yes | Yes | Yes | Yes | Yes | +| | | | | | | | | +| **Python** | | | | | | | | +| OpenTelemetry API | | | | | | | | +| Python Logging Module | | | | | | | | +| AI Classic API | | | | | | | | -```csharp -using System.Diagnostics; -using OpenTelemetry; -using OpenTelemetry.Trace; +> [!NOTE] +> Application Insights Java 3.x listens for telemetry that's sent to the Application Insights Classic API. Similarly, Application Insights Node.js 3.x collects events created with the Application Insights Classic API. This makes upgrading easier and fills a gap in our custom telemetry support until all custom telemetry types are supported via the OpenTelemetry API. -public class ActivityEnrichingProcessor : BaseProcessor<Activity> -{ - public override void OnEnd(Activity activity) - { - // The updated activity will be available to all processors which are called after this processor. 
- activity.DisplayName = "Updated-" + activity.DisplayName; - activity.SetTag("CustomDimension1", "Value1"); - activity.SetTag("CustomDimension2", "Value2"); - } -} -``` +### Add Custom Metrics -##### [Node.js (JavaScript)](#tab/nodejs-javascript) +> [!NOTE] +> Custom Metrics are under preview in Azure Monitor Application Insights. Custom metrics without dimensions are available by default. To view and alert on dimensions, you need to [opt-in](pre-aggregated-metrics-log-metrics.md#custom-metrics-dimensions-and-pre-aggregation). -Use a custom processor: +You may want to collect metrics beyond what is collected by [instrumentation libraries](#instrumentation-libraries). -> [!TIP] -> Add the processor shown here *before* the Azure Monitor Exporter. +The OpenTelemetry API offers six metric "instruments" to cover various metric scenarios and you'll need to pick the correct "Aggregation Type" when visualizing metrics in Metrics Explorer. This requirement is true when using the OpenTelemetry Metric API to send metrics and when using an instrumentation library. -```javascript -const { AzureMonitorTraceExporter } = require("@azure/monitor-opentelemetry-exporter"); -const { NodeTracerProvider } = require("@opentelemetry/sdk-trace-node"); -const { SimpleSpanProcessor } = require("@opentelemetry/sdk-trace-base"); +The following table shows the recommended [aggregation types](../essentials/metrics-aggregation-explained.md#aggregation-types) for each of the OpenTelemetry Metric Instruments. 
-class SpanEnrichingProcessor { - forceFlush() { - return Promise.resolve(); - } - shutdown() { - return Promise.resolve(); - } - onStart(_span){} - onEnd(span){ - span.attributes["CustomDimension1"] = "value1"; - span.attributes["CustomDimension2"] = "value2"; - } -} +| OpenTelemetry Instrument | Azure Monitor Aggregation Type | +||| +| Counter | Sum | +| Asynchronous Counter | Sum | +| Histogram | Min, Max, Average, Sum and Count | +| Asynchronous Gauge | Average | +| UpDownCounter | Sum | +| Asynchronous UpDownCounter | Sum | -const provider = new NodeTracerProvider(); -const azureExporter = new AzureMonitorTraceExporter({ - connectionString: "<Your Connection String>" -}); +> [!CAUTION] +> Aggregation types beyond what's shown in the table typically aren't meaningful. -provider.addSpanProcessor(new SpanEnrichingProcessor()); -provider.addSpanProcessor(new SimpleSpanProcessor(azureExporter)); -``` +The [OpenTelemetry Specification](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/api.md#instrument) +describes the instruments and provides examples of when you might use each one. ++> [!TIP] +> The histogram is the most versatile and most closely equivalent to the Application Insights Track Metric Classic API. Azure Monitor currently flattens the histogram instrument into our five supported aggregation types, and support for percentiles is underway. Although less versatile, other OpenTelemetry instruments have a lesser impact on your application's performance. -##### [Node.js (TypeScript)](#tab/nodejs-typescript) +#### Histogram Example -Use a custom processor: +#### [.NET](#tab/net) -> [!TIP] -> Add the processor shown here *before* the Azure Monitor Exporter. 
+```csharp +using System.Diagnostics.Metrics; +using Azure.Monitor.OpenTelemetry.Exporter; +using OpenTelemetry; +using OpenTelemetry.Metrics; -```typescript -import { AzureMonitorTraceExporter } from "@azure/monitor-opentelemetry-exporter"; -import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node"; -import { ReadableSpan, SimpleSpanProcessor, Span, SpanProcessor } from "@opentelemetry/sdk-trace-base"; +public class Program +{ + private static readonly Meter meter = new("OTel.AzureMonitor.Demo"); -class SpanEnrichingProcessor implements SpanProcessor { - forceFlush(): Promise<void>{ - return Promise.resolve(); - } - shutdown(): Promise<void>{ - return Promise.resolve(); - } - onStart(_span: Span): void{} - onEnd(span: ReadableSpan){ - span.attributes["CustomDimension1"] = "value1"; - span.attributes["CustomDimension2"] = "value2"; + public static void Main() + { + using var meterProvider = Sdk.CreateMeterProviderBuilder() + .AddMeter("OTel.AzureMonitor.Demo") + .AddAzureMonitorMetricExporter(o => + { + o.ConnectionString = "<Your Connection String>"; + }) + .Build(); ++ Histogram<long> myFruitSalePrice = meter.CreateHistogram<long>("FruitSalePrice"); ++ var rand = new Random(); + myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "apple"), new("color", "red")); + myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "lemon"), new("color", "yellow")); + myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "lemon"), new("color", "yellow")); + myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "apple"), new("color", "green")); + myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "apple"), new("color", "red")); + myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "lemon"), new("color", "yellow")); ++ System.Console.WriteLine("Press Enter key to exit."); + System.Console.ReadLine(); } }+``` -const provider = new NodeTracerProvider(); -const azureExporter = new AzureMonitorTraceExporter({ - connectionString: "<Your Connection String>" 
-}); +#### [Java](#tab/java) -provider.addSpanProcessor(new SpanEnrichingProcessor()); -provider.addSpanProcessor(new SimpleSpanProcessor(azureExporter)); -``` +Coming soon. -##### [Python](#tab/python) +#### [Node.js](#tab/nodejs) -Use a custom processor: + ```javascript + const { + MeterProvider, + PeriodicExportingMetricReader, + } = require("@opentelemetry/sdk-metrics"); + const { + AzureMonitorMetricExporter, + } = require("@azure/monitor-opentelemetry-exporter"); -> [!TIP] -> Add the processor shown here *before* the Azure Monitor Exporter. + const provider = new MeterProvider(); + const exporter = new AzureMonitorMetricExporter({ + connectionString: "<Your Connection String>", + }); -```python -... -from opentelemetry.sdk.trace import TracerProvider -from opentelemetry.sdk.trace.export import BatchSpanProcessor + const metricReader = new PeriodicExportingMetricReader({ + exporter: exporter, + }); -trace.set_tracer_provider(TracerProvider()) -span_processor = BatchSpanProcessor(exporter) -span_enrich_processor = SpanEnrichingProcessor() -trace.get_tracer_provider().add_span_processor(span_enrich_processor) -trace.get_tracer_provider().add_span_processor(span_processor) -... 
+  provider.addMetricReader(metricReader);
+
+  const meter = provider.getMeter("OTel.AzureMonitor.Demo");
+  let histogram = meter.createHistogram("histogram");
+
+  histogram.record(1, { testKey: "testValue" });
+  histogram.record(30, { testKey: "testValue2" });
+  histogram.record(100, { testKey2: "testValue" });
 ```

-Add `SpanEnrichingProcessor.py` to your project with the following code:
+#### [Python](#tab/python)

 ```python
-from opentelemetry.sdk.trace import SpanProcessor
+from opentelemetry import metrics
+from opentelemetry.sdk.metrics import MeterProvider
+from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader

-class SpanEnrichingProcessor(SpanProcessor):
+from azure.monitor.opentelemetry.exporter import AzureMonitorMetricExporter

-    def on_end(self, span):
-        span._name = "Updated-" + span.name
-        span._attributes["CustomDimension1"] = "Value1"
-        span._attributes["CustomDimension2"] = "Value2"
-```
-
+exporter = AzureMonitorMetricExporter(connection_string="<your-connection-string>")
+reader = PeriodicExportingMetricReader(exporter)
+metrics.set_meter_provider(MeterProvider(metric_readers=[reader]))
+meter = metrics.get_meter_provider().get_meter("otel_azure_monitor_histogram_demo")

-#### Set the user IP
+histogram = meter.create_histogram("histogram")
+histogram.record(1.0, {"test_key": "test_value"})
+histogram.record(100.0, {"test_key2": "test_value"})
+histogram.record(30.0, {"test_key": "test_value2"})

-You can populate the _client_IP_ field for requests by setting the `http.client_ip` attribute on the span. Application Insights uses the IP address to generate user location attributes and then [discards it by default](ip-collection.md#default-behavior).
+input() +``` -##### [.NET](#tab/net) + -Use the add [custom property example](#add-a-custom-property-to-a-trace), but replace the following lines of code in `ActivityEnrichingProcessor.cs`: +#### Counter Example -```C# -// only applicable in case of activity.Kind == Server -activity.SetTag("http.client_ip", "<IP Address>"); -``` +#### [.NET](#tab/net) ++```csharp +using System.Diagnostics.Metrics; +using Azure.Monitor.OpenTelemetry.Exporter; +using OpenTelemetry; +using OpenTelemetry.Metrics; -##### [Node.js (JavaScript)](#tab/nodejs-javascript) +public class Program +{ + private static readonly Meter meter = new("OTel.AzureMonitor.Demo"); -Use the add [custom property example](#add-a-custom-property-to-a-trace), but replace the following lines of code: + public static void Main() + { + using var meterProvider = Sdk.CreateMeterProviderBuilder() + .AddMeter("OTel.AzureMonitor.Demo") + .AddAzureMonitorMetricExporter(o => + { + o.ConnectionString = "<Your Connection String>"; + }) + .Build(); -```javascript -... -const { SemanticAttributes } = require("@opentelemetry/semantic-conventions"); + Counter<long> myFruitCounter = meter.CreateCounter<long>("MyFruitCounter"); -class SpanEnrichingProcessor { - ... 
+ myFruitCounter.Add(1, new("name", "apple"), new("color", "red")); + myFruitCounter.Add(2, new("name", "lemon"), new("color", "yellow")); + myFruitCounter.Add(1, new("name", "lemon"), new("color", "yellow")); + myFruitCounter.Add(2, new("name", "apple"), new("color", "green")); + myFruitCounter.Add(5, new("name", "apple"), new("color", "red")); + myFruitCounter.Add(4, new("name", "lemon"), new("color", "yellow")); - onEnd(span){ - span.attributes[SemanticAttributes.HTTP_CLIENT_IP] = "<IP Address>"; + System.Console.WriteLine("Press Enter key to exit."); + System.Console.ReadLine(); } } ``` -##### [Node.js (TypeScript)](#tab/nodejs-typescript) +#### [Java](#tab/java) -Use the add [custom property example](#add-a-custom-property-to-a-trace), but replace the following lines of code: +Coming soon. -```typescript -... -import { SemanticAttributes } from "@opentelemetry/semantic-conventions"; +#### [Node.js](#tab/nodejs) -class SpanEnrichingProcessor implements SpanProcessor{ - ... +```javascript + const { + MeterProvider, + PeriodicExportingMetricReader, + } = require("@opentelemetry/sdk-metrics"); + const { AzureMonitorMetricExporter } = require("@azure/monitor-opentelemetry-exporter"); - onEnd(span: ReadableSpan){ - span.attributes[SemanticAttributes.HTTP_CLIENT_IP] = "<IP Address>"; - } -} + const provider = new MeterProvider(); + const exporter = new AzureMonitorMetricExporter({ + connectionString: "<Your Connection String>", + }); + const metricReader = new PeriodicExportingMetricReader({ + exporter: exporter, + }); + provider.addMetricReader(metricReader); + const meter = provider.getMeter("OTel.AzureMonitor.Demo"); + let counter = meter.createCounter("counter"); + counter.add(1, { "testKey": "testValue" }); + counter.add(5, { "testKey2": "testValue" }); + counter.add(3, { "testKey": "testValue2" }); ``` -##### [Python](#tab/python) --Use the add [custom property example](#add-a-custom-property-to-a-trace), but replace the following lines of code in 
`SpanEnrichingProcessor.py`:
+#### [Python](#tab/python)

 ```python
-span._attributes["http.client_ip"] = "<IP Address>"
-```
+from opentelemetry import metrics
+from opentelemetry.sdk.metrics import MeterProvider
+from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
-
-<!--
+from azure.monitor.opentelemetry.exporter import AzureMonitorMetricExporter

-#### Set the user ID or authenticated user ID
+exporter = AzureMonitorMetricExporter(connection_string="<your-connection-string>")
+reader = PeriodicExportingMetricReader(exporter)
+metrics.set_meter_provider(MeterProvider(metric_readers=[reader]))
+meter = metrics.get_meter_provider().get_meter("otel_azure_monitor_counter_demo")

-You can populate the _user_Id_ or _user_Authenticatedid_ field for requests by setting the `xyz` or `xyz` attribute on the span. User ID is an anonymous user identifier. Authenticated User ID is a known user identifier.
+counter = meter.create_counter("counter")
+counter.add(1.0, {"test_key": "test_value"})
+counter.add(5.0, {"test_key2": "test_value"})
+counter.add(3.0, {"test_key": "test_value2"})

-> [!IMPORTANT]
-> Consult applicable privacy laws before you set the Authenticated User ID.
+input()
+```

-##### [.NET](#tab/net)
+

-Use the add [custom property example](#add-a-custom-property-to-a-trace), but replace the following lines of code:
+#### Gauge Example

-```C#
-Placeholder
-```
+#### [.NET](#tab/net)

-##### [Node.js](#tab/nodejs)
+```csharp
+using System.Diagnostics;
+using System.Diagnostics.Metrics;
+using Azure.Monitor.OpenTelemetry.Exporter;
+using OpenTelemetry;
+using OpenTelemetry.Metrics;

-Use the add [custom property example](#add-a-custom-property-to-a-trace), but replace the following lines of code:
+public class Program
+{
+    private static readonly Meter meter = new("OTel.AzureMonitor.Demo");

-```typescript
-... 
-import { SemanticAttributes } from "@opentelemetry/semantic-conventions"; + public static void Main() + { + using var meterProvider = Sdk.CreateMeterProviderBuilder() + .AddMeter("OTel.AzureMonitor.Demo") + .AddAzureMonitorMetricExporter(o => + { + o.ConnectionString = "<Your Connection String>"; + }) + .Build(); -class SpanEnrichingProcessor implements SpanProcessor{ - ... + var process = Process.GetCurrentProcess(); + + ObservableGauge<int> myObservableGauge = meter.CreateObservableGauge("Thread.State", () => GetThreadState(process)); - onEnd(span: ReadableSpan){ - span.attributes[SemanticAttributes.ENDUSER_ID] = "<User ID>"; + System.Console.WriteLine("Press Enter key to exit."); + System.Console.ReadLine(); + } + + private static IEnumerable<Measurement<int>> GetThreadState(Process process) + { + foreach (ProcessThread thread in process.Threads) + { + yield return new((int)thread.ThreadState, new("ProcessId", process.Id), new("ThreadId", thread.Id)); + } } } ``` -##### [Python](#tab/python) +#### [Java](#tab/java) ++Coming soon. 
-Use the add [custom property example](#add-a-custom-property-to-a-trace), but replace the following lines of code:
+#### [Node.js](#tab/nodejs)
+
+```javascript
+  const {
+    MeterProvider,
+    PeriodicExportingMetricReader
+  } = require("@opentelemetry/sdk-metrics");
+  const { AzureMonitorMetricExporter } = require("@azure/monitor-opentelemetry-exporter");
+
+  const provider = new MeterProvider();
+  const exporter = new AzureMonitorMetricExporter({
+    connectionString: "<Your Connection String>",
+  });
+  const metricReader = new PeriodicExportingMetricReader({
+    exporter: exporter
+  });
+  provider.addMetricReader(metricReader);
+  const meter = provider.getMeter("OTel.AzureMonitor.Demo");
+  let gauge = meter.createObservableGauge("gauge");
+  gauge.addCallback((observableResult) => {
+    let randomNumber = Math.floor(Math.random() * 100);
+    observableResult.observe(randomNumber, {"testKey": "testValue"});
+  });
+```
+
+#### [Python](#tab/python)

 ```python
+from typing import Iterable
+
+from opentelemetry import metrics
+from opentelemetry.metrics import CallbackOptions, Observation
+from opentelemetry.sdk.metrics import MeterProvider
+from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
+
+from azure.monitor.opentelemetry.exporter import AzureMonitorMetricExporter
+
+exporter = AzureMonitorMetricExporter(connection_string="<your-connection-string>")
+reader = PeriodicExportingMetricReader(exporter)
+metrics.set_meter_provider(MeterProvider(metric_readers=[reader]))
+meter = metrics.get_meter_provider().get_meter("otel_azure_monitor_gauge_demo")
+
+def observable_gauge_generator(options: CallbackOptions) -> Iterable[Observation]:
+    yield Observation(9, {"test_key": "test_value"})
+
+def observable_gauge_sequence(options: CallbackOptions) -> Iterable[Observation]:
+    observations = []
+    for i in range(10):
+        observations.append(
+            Observation(9, {"test_key": i})
+        )
+    return observations
+
+gauge = meter.create_observable_gauge("gauge", [observable_gauge_generator])
+gauge2 = meter.create_observable_gauge("gauge2", [observable_gauge_sequence])
+
+input()
+```

->

-### Filter telemetry
+### Add Custom Exceptions

-You might use the following ways to filter out telemetry before it leaves your application.
+Some instrumentation libraries automatically report exceptions to Application Insights.
+However, you may want to manually report exceptions beyond what instrumentation libraries report.
+For instance, exceptions caught by your code aren't ordinarily reported. You may wish to report them
+to draw attention in relevant experiences including the failures section and end-to-end transaction views.

+#### [.NET](#tab/net)

-1. Many instrumentation libraries provide a filter option. For guidance, see the readme files of individual instrumentation libraries:
-    - [ASP.NET](https://github.com/open-telemetry/opentelemetry-dotnet-contrib/blob/Instrumentation.AspNet-1.0.0-rc9.6/src/OpenTelemetry.Instrumentation.AspNet/README.md#filter)
-    - [ASP.NET Core](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.7/src/OpenTelemetry.Instrumentation.AspNetCore/README.md#filter)
-    - [HttpClient and HttpWebRequest](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.7/src/OpenTelemetry.Instrumentation.Http/README.md#filter)

-1. 
Use a custom processor: - - ```csharp - using var tracerProvider = Sdk.CreateTracerProviderBuilder() - .AddSource("OTel.AzureMonitor.Demo") - .AddProcessor(new ActivityFilteringProcessor()) - .AddAzureMonitorTraceExporter(o => - { - o.ConnectionString = "<Your Connection String>" - }) - .Build(); - ``` - - Add `ActivityFilteringProcessor.cs` to your project with the following code: - - ```csharp - using System.Diagnostics; - using OpenTelemetry; - using OpenTelemetry.Trace; - - public class ActivityFilteringProcessor : BaseProcessor<Activity> +```csharp +using (var activity = activitySource.StartActivity("ExceptionExample")) +{ + try {- public override void OnStart(Activity activity) - { - // prevents all exporters from exporting internal activities - if (activity.Kind == ActivityKind.Internal) - { - activity.IsAllDataRequested = false; - } - } + throw new Exception("Test exception"); }- ``` + catch (Exception ex) + { + activity?.SetStatus(ActivityStatusCode.Error); + activity?.RecordException(ex); + } +} +``` -1. If a particular source isn't explicitly added by using `AddSource("ActivitySourceName")`, then none of the activities created by using that source will be exported. +#### [Java](#tab/java) -#### [Node.js (JavaScript)](#tab/nodejs-javascript) +You can use `opentelemetry-api` to update the status of a span and record exceptions. -1. Exclude the URL option provided by many HTTP instrumentation libraries. +1. 
Add `opentelemetry-api-1.0.0.jar` (or later) to your application:

-    The following example shows how to exclude a certain URL from being tracked by using the [HTTP/HTTPS instrumentation library](https://github.com/open-telemetry/opentelemetry-js/tree/main/experimental/packages/opentelemetry-instrumentation-http):
-
-    ```javascript
-    const { registerInstrumentations } = require( "@opentelemetry/instrumentation");
-    const { HttpInstrumentation } = require( "@opentelemetry/instrumentation-http");
-    const { NodeTracerProvider } = require( "@opentelemetry/sdk-trace-node");
+    ```xml
+    <dependency>
+        <groupId>io.opentelemetry</groupId>
+        <artifactId>opentelemetry-api</artifactId>
+        <version>1.0.0</version>
+    </dependency>
+    ```

-    const httpInstrumentationConfig = {
-        ignoreIncomingRequestHook: (request) => {
-            // Ignore OPTIONS incoming requests
-            if (request.method === 'OPTIONS') {
-                return true;
-            }
-            return false;
-        },
-        ignoreOutgoingRequestHook: (options) => {
-            // Ignore outgoing requests with /test path
-            if (options.path === '/test') {
-                return true;
-            }
-            return false;
-        }
-    };
+1. Set status to `error` and record an exception in your code:

-    const httpInstrumentation = new HttpInstrumentation(httpInstrumentationConfig);
-    const provider = new NodeTracerProvider();
-    provider.register();
+    ```java
+    import io.opentelemetry.api.trace.Span;
+    import io.opentelemetry.api.trace.StatusCode;

-    registerInstrumentations({
-        instrumentations: [
-            httpInstrumentation,
-        ]
-    });
-    ```
+    Span span = Span.current();
+    span.setStatus(StatusCode.ERROR, "errorMessage");
+    span.recordException(e);
+    ```

-2. Use a custom processor. You can use a custom span processor to exclude certain spans from being exported. To mark spans to not be exported, set `TraceFlag` to `DEFAULT`. 
-Use the add [custom property example](#add-a-custom-property-to-a-trace), but replace the following lines of code: +#### [Node.js](#tab/nodejs) - ```javascript - const { SpanKind, TraceFlags } = require("@opentelemetry/api"); +```javascript +const { trace } = require("@opentelemetry/api"); +const { BasicTracerProvider, SimpleSpanProcessor } = require("@opentelemetry/sdk-trace-base"); +const { AzureMonitorTraceExporter } = require("@azure/monitor-opentelemetry-exporter"); - class SpanEnrichingProcessor { - ... +const provider = new BasicTracerProvider(); +const exporter = new AzureMonitorTraceExporter({ + connectionString: "<Your Connection String>", +}); +provider.addSpanProcessor(new SimpleSpanProcessor(exporter)); +provider.register(); +const tracer = trace.getTracer("example-basic-tracer-node"); +let span = tracer.startSpan("hello"); +try{ + throw new Error("Test Error"); +} +catch(error){ + span.recordException(error); +} +``` - onEnd(span) { - if(span.kind == SpanKind.INTERNAL){ - span.spanContext().traceFlags = TraceFlags.NONE; - } - } +#### [Python](#tab/python) ++The OpenTelemetry Python SDK is implemented in such a way that exceptions thrown will automatically be captured and recorded. See below for an example of this behavior. 
+
+```python
+from opentelemetry import trace
+from opentelemetry.sdk.trace import TracerProvider
+from opentelemetry.sdk.trace.export import BatchSpanProcessor
+
+from azure.monitor.opentelemetry.exporter import AzureMonitorTraceExporter
+
+exporter = AzureMonitorTraceExporter(connection_string="<your-connection-string>")
+
+trace.set_tracer_provider(TracerProvider())
+tracer = trace.get_tracer("otel_azure_monitor_exception_demo")
+span_processor = BatchSpanProcessor(exporter)
+trace.get_tracer_provider().add_span_processor(span_processor)
+
+# Exception events
+try:
+    with tracer.start_as_current_span("hello") as span:
+        # This exception will be automatically recorded
+        raise Exception("Custom exception message.")
+except Exception:
+    print("Exception raised")
+
+```
+
+If you would like to record exceptions manually, you can disable that option when creating the span, as shown below.
+
+```python
+...
+with tracer.start_as_current_span("hello", record_exception=False) as span:
+    try:
+        raise Exception("Custom exception message.")
+    except Exception as ex:
+        # Manually record exception
+        span.record_exception(ex)
+...
+
+```
+
+
+
+### Add Custom Spans
+
+You may want to add a custom span in two scenarios: when a dependency request isn't already collected by an instrumentation library, and when an application process should be modeled as a span on the end-to-end transaction view.
+
+#### [.NET](#tab/net)
+
+Coming soon.
+
+#### [Java](#tab/java)
+
+#### Use the OpenTelemetry annotation
+
+The simplest way to add your own spans is by using OpenTelemetry's `@WithSpan` annotation.
+
+Spans populate the `requests` and `dependencies` tables in Application Insights.
+
+1. Add `opentelemetry-instrumentation-annotations-1.21.0.jar` (or later) to your application:
+
+    ```xml
+    <dependency>
+        <groupId>io.opentelemetry.instrumentation</groupId>
+        <artifactId>opentelemetry-instrumentation-annotations</artifactId>
+        <version>1.21.0</version>
+    </dependency>
+    ```
+
+1. 
Use the `@WithSpan` annotation to emit a span each time your method is executed: ++ ```java + import io.opentelemetry.instrumentation.annotations.WithSpan; ++ @WithSpan(value = "your span name") + public void yourMethod() { }- ``` + ``` -#### [Node.js (TypeScript)](#tab/nodejs-typescript) +By default, the span will end up in the `dependencies` table with dependency type `InProc`. -1. Exclude the URL option provided by many HTTP instrumentation libraries. +If your method represents a background job that isn't already captured by auto-instrumentation, +we recommend that you apply the attribute `kind = SpanKind.SERVER` to the `@WithSpan` annotation +so that it will end up in the Application Insights `requests` table. - The following example shows how to exclude a certain URL from being tracked by using the [HTTP/HTTPS instrumentation library](https://github.com/open-telemetry/opentelemetry-js/tree/main/experimental/packages/opentelemetry-instrumentation-http): - - ```typescript - import { IncomingMessage } from "http"; - import { RequestOptions } from "https"; - import { registerInstrumentations } from "@opentelemetry/instrumentation"; - import { HttpInstrumentation, HttpInstrumentationConfig } from "@opentelemetry/instrumentation-http"; - import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node"; -- const httpInstrumentationConfig: HttpInstrumentationConfig = { - ignoreIncomingRequestHook: (request: IncomingMessage) => { - // Ignore OPTIONS incoming requests - if (request.method === 'OPTIONS') { - return true; - } - return false; - }, - ignoreOutgoingRequestHook: (options: RequestOptions) => { - // Ignore outgoing requests with /test path - if (options.path === '/test') { - return true; - } - return false; - } - }; - const httpInstrumentation = new HttpInstrumentation(httpInstrumentationConfig); - const provider = new NodeTracerProvider(); - provider.register(); - registerInstrumentations({ - instrumentations: [ - httpInstrumentation, - ] - }); - - ``` +#### 
Use the OpenTelemetry API

-2. Use a custom processor. You can use a custom span processor to exclude certain spans from being exported. To mark spans to not be exported, set `TraceFlag` to `DEFAULT`.
-Use the add [custom property example](#add-a-custom-property-to-a-trace), but replace the following lines of code:
+If the preceding OpenTelemetry `@WithSpan` annotation doesn't meet your needs,
+you can add your spans by using the OpenTelemetry API.

-    ```typescript
-    ...
-    import { SpanKind, TraceFlags } from "@opentelemetry/api";
-
-    class SpanEnrichingProcessor implements SpanProcessor{
-        ...
-
-        onEnd(span: ReadableSpan) {
-            if(span.kind == SpanKind.INTERNAL){
-                span.spanContext().traceFlags = TraceFlags.NONE;
-            }
-        }

+1. Add `opentelemetry-api-1.0.0.jar` (or later) to your application:
+
+    ```xml
+    <dependency>
+        <groupId>io.opentelemetry</groupId>
+        <artifactId>opentelemetry-api</artifactId>
+        <version>1.0.0</version>
+    </dependency>
+    ```
+
+1. Use the `GlobalOpenTelemetry` class to create a `Tracer`:
+
+    ```java
+    import io.opentelemetry.api.GlobalOpenTelemetry;
+    import io.opentelemetry.api.trace.Tracer;
+
+    static final Tracer tracer = GlobalOpenTelemetry.getTracer("com.example");
+    ```
+
+1. Create a span, make it current, and then end it:
+
+    ```java
+    Span span = tracer.spanBuilder("my first span").startSpan();
+    try (Scope ignored = span.makeCurrent()) {
+        // do stuff within the context of this span
+    } catch (Throwable t) {
+        span.recordException(t);
+    } finally {
+        span.end();
+    }
+    ```
+
+#### [Node.js](#tab/nodejs)
+
+Coming soon.
+
#### [Python](#tab/python)

-1. Exclude the URL option provided by many HTTP instrumentation libraries.
+Coming soon.

-    The following example shows how to exclude a certain URL from being tracked by using the [Flask](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-flask) instrumentation:
-
-    ```python
-    ... 
- import flask - - from opentelemetry.instrumentation.flask import FlaskInstrumentor - - # You might also populate OTEL_PYTHON_FLASK_EXCLUDED_URLS env variable - # List will consist of comma delimited regexes representing which URLs to exclude - excluded_urls = "client/.*/info,healthcheck" - - FlaskInstrumentor().instrument(excluded_urls=excluded_urls) # Do this before flask.Flask - app = flask.Flask(__name__) - ... - ``` + -1. Use a custom processor. You can use a custom span processor to exclude certain spans from being exported. To mark spans to not be exported, set `TraceFlag` to `DEFAULT`. - - ```python - ... - from opentelemetry.sdk.trace import TracerProvider - from opentelemetry.sdk.trace.export import BatchSpanProcessor +<!-- ++### Add Custom Events ++#### Span Events ++The OpenTelemetry Logs/Events API is still under development. In the meantime, you can use the OpenTelemetry Span API to create "Span Events", which populate the traces table in Application Insights. The string passed in to addEvent() is saved to the message field within the trace. ++> [!CAUTION] +> Span Events are only recommended for when you need additional diagnostic metadata associated with your span. For other scenarios, such as describing business events, we recommend you wait for the release of the OpenTelemetry Events API. ++#### [.NET](#tab/net) + +Coming soon. + +#### [Java](#tab/java) ++You can use `opentelemetry-api` to create span events, which populate the `traces` table in Application Insights. The string passed in to `addEvent()` is saved to the `message` field within the trace. ++1. Add `opentelemetry-api-1.0.0.jar` (or later) to your application: ++ ```xml + <dependency> + <groupId>io.opentelemetry.instrumentation</groupId> + <artifactId>opentelemetry-api</artifactId> + <version>1.0.0</version> + </dependency> + ``` ++1. 
Add span events in your code: ++ ```java + import io.opentelemetry.api.trace.Span; ++ Span.current().addEvent("eventName"); + ``` ++#### [Node.js](#tab/nodejs) ++Coming soon. + +#### [Python](#tab/python) ++Coming soon. ++++--> + +### Send custom telemetry using the Application Insights Classic API + +We recommend you use the OpenTelemetry APIs whenever possible, but there may be some scenarios when you have to use the Application Insights Classic APIs. + +#### [.NET](#tab/net) + +This is not available in .NET. ++#### [Java](#tab/java) ++1. Add `applicationinsights-core` to your application: ++ ```xml + <dependency> + <groupId>com.microsoft.azure</groupId> + <artifactId>applicationinsights-core</artifactId> + <version>3.4.8</version> + </dependency> + ``` ++1. Create a `TelemetryClient` instance: - trace.set_tracer_provider(TracerProvider()) - span_processor = BatchSpanProcessor(exporter) - span_filter_processor = SpanFilteringProcessor() - trace.get_tracer_provider().add_span_processor(span_filter_processor) - trace.get_tracer_provider().add_span_processor(span_processor) - ... + ```java + static final TelemetryClient telemetryClient = new TelemetryClient(); ```++1. 
Use the client to send custom telemetry: ++ ##### Events - Add `SpanFilteringProcessor.py` to your project with the following code: + ```java + telemetryClient.trackEvent("WinGame"); + ``` - ```python - from opentelemetry.trace import SpanContext, SpanKind, TraceFlags - from opentelemetry.sdk.trace import SpanProcessor + ##### Metrics - class SpanFilteringProcessor(SpanProcessor): + ```java + telemetryClient.trackMetric("queueLength", 42.0); + ``` - # prevents exporting spans from internal activities - def on_start(self, span): - if span._kind is SpanKind.INTERNAL: - span._context = SpanContext( - span.context.trace_id, - span.context.span_id, - span.context.is_remote, - TraceFlags.DEFAULT, - span.context.trace_state, - ) + ##### Dependencies + ```java + boolean success = false; + long startTime = System.currentTimeMillis(); + try { + success = dependency.call(); + } finally { + long endTime = System.currentTimeMillis(); + RemoteDependencyTelemetry telemetry = new RemoteDependencyTelemetry(); + telemetry.setSuccess(success); + telemetry.setTimestamp(new Date(startTime)); + telemetry.setDuration(new Duration(endTime - startTime)); + telemetryClient.trackDependency(telemetry); + } ``` - <!-- For more information, see [GitHub Repo](link). --> - <! - ### Get the trace ID or span ID - You might use X or Y to get the trace ID or span ID. Adding a trace ID or span ID to existing logging telemetry enables better correlation when you debug and diagnose issues. + ##### Logs - > [!NOTE] - > If you manually create spans for log-based metrics and alerting, you need to update them to use the metrics API (after it's released) to ensure accuracy. - - ```python - Placeholder + ```java + telemetryClient.trackTrace(message, SeverityLevel.Warning, properties); ``` - For more information, see [GitHub Repo](link). - > + ##### Exceptions + + ```java + try { + ... + } catch (Exception e) { + telemetryClient.trackException(e); + } + ``` ++#### [Node.js](#tab/nodejs) + +Coming soon. 
+ +#### [Python](#tab/python) + +This is not available in Python. -## Custom telemetry +## Modify telemetry -This section explains how to collect custom telemetry from your application. +This section explains how to modify telemetry. -### Add Custom Metrics +### Add span attributes -> [!NOTE] -> Custom Metrics are under preview in Azure Monitor Application Insights. Custom metrics without dimensions are available by default. To view and alert on dimensions, you need to [opt-in](pre-aggregated-metrics-log-metrics.md#custom-metrics-dimensions-and-pre-aggregation). +These attributes might include adding a custom property to your telemetry. You might also use attributes to set optional fields in the Application Insights schema, like Client IP. -You may want to collect metrics beyond what is collected by [instrumentation libraries](#instrumentation-libraries). +#### Add a custom property to a Span -The OpenTelemetry API offers six metric "instruments" to cover various metric scenarios and you'll need to pick the correct "Aggregation Type" when visualizing metrics in Metrics Explorer. This requirement is true when using the OpenTelemetry Metric API to send metrics and when using an instrumentation library. +Any [attributes](#add-span-attributes) you add to spans are exported as custom properties. They populate the _customDimensions_ field in the requests, dependencies, traces, or exceptions table. -The following table shows the recommended [aggregation types](../essentials/metrics-aggregation-explained.md#aggregation-types) for each of the OpenTelemetry Metric Instruments. 
+##### [.NET](#tab/net) -| OpenTelemetry Instrument | Azure Monitor Aggregation Type | -||| -| Counter | Sum | -| Asynchronous Counter | Sum | -| Histogram | Min, Max, Average, Sum and Count | -| Asynchronous Gauge | Average | -| UpDownCounter | Sum | -| Asynchronous UpDownCounter | Sum | +To add span attributes, use either of the following two ways: -> [!CAUTION] -> Aggregation types beyond what's shown in the table typically aren't meaningful. +* Use options provided by [instrumentation libraries](#instrumentation-libraries). +* Add a custom span processor. -The [OpenTelemetry Specification](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/api.md#instrument) -describes the instruments and provides examples of when you might use each one. +> [!TIP] +> The advantage of using options provided by instrumentation libraries, when they're available, is that the entire context is available. As a result, users can select to add or filter more attributes. For example, the enrich option in the HttpClient instrumentation library gives users access to the httpRequestMessage itself. They can select anything from it and store it as an attribute. ++1. Many instrumentation libraries provide an enrich option. For guidance, see the readme files of individual instrumentation libraries: + - [ASP.NET](https://github.com/open-telemetry/opentelemetry-dotnet-contrib/blob/Instrumentation.AspNet-1.0.0-rc9.6/src/OpenTelemetry.Instrumentation.AspNet/README.md#enrich) + - [ASP.NET Core](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.7/src/OpenTelemetry.Instrumentation.AspNetCore/README.md#enrich) + - [HttpClient and HttpWebRequest](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.7/src/OpenTelemetry.Instrumentation.Http/README.md#enrich) ++1. Use a custom processor: > [!TIP]-> The histogram is the most versatile and most closely equivalent to the prior Application Insights Track Metric API. 
Azure Monitor currently flattens the histogram instrument into our five supported aggregation types, and support for percentiles is underway. Although less versatile, other OpenTelemetry instruments have a lesser impact on your application's performance. +> Add the processor shown here *before* the Azure Monitor Exporter. -#### Histogram Example +```csharp +using var tracerProvider = Sdk.CreateTracerProviderBuilder() +    .AddSource("OTel.AzureMonitor.Demo") +    .AddProcessor(new ActivityEnrichingProcessor()) +    .AddAzureMonitorTraceExporter(o => +    { +        o.ConnectionString = "<Your Connection String>"; +    }) +    .Build(); +``` -#### [.NET](#tab/net) +Add `ActivityEnrichingProcessor.cs` to your project with the following code: ```csharp-using System.Diagnostics.Metrics; -using Azure.Monitor.OpenTelemetry.Exporter; +using System.Diagnostics; using OpenTelemetry;-using OpenTelemetry.Metrics; +using OpenTelemetry.Trace; -public class Program +public class ActivityEnrichingProcessor : BaseProcessor<Activity> {-    private static readonly Meter meter = new("OTel.AzureMonitor.Demo"); --    public static void Main() +    public override void OnEnd(Activity activity) {-        using var meterProvider = Sdk.CreateMeterProviderBuilder() -            .AddMeter("OTel.AzureMonitor.Demo") -            .AddAzureMonitorMetricExporter(o => -            { -                o.ConnectionString = "<Your Connection String>"; -            }) -            .Build(); +        // The updated activity will be available to all processors which are called after this processor.
+ activity.DisplayName = "Updated-" + activity.DisplayName; + activity.SetTag("CustomDimension1", "Value1"); + activity.SetTag("CustomDimension2", "Value2"); + } +} +``` - Histogram<long> myFruitSalePrice = meter.CreateHistogram<long>("FruitSalePrice"); +##### [Java](#tab/java) - var rand = new Random(); - myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "apple"), new("color", "red")); - myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "lemon"), new("color", "yellow")); - myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "lemon"), new("color", "yellow")); - myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "apple"), new("color", "green")); - myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "apple"), new("color", "red")); - myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "lemon"), new("color", "yellow")); +You can use `opentelemetry-api` to add attributes to spans. - System.Console.WriteLine("Press Enter key to exit."); - System.Console.ReadLine(); +Adding one or more span attributes populates the `customDimensions` field in the `requests`, `dependencies`, `traces`, or `exceptions` table. ++1. Add `opentelemetry-api-1.0.0.jar` (or later) to your application: ++ ```xml + <dependency> + <groupId>io.opentelemetry.instrumentation</groupId> + <artifactId>opentelemetry-api</artifactId> + <version>1.0.0</version> + </dependency> + ``` ++1. Add custom dimensions in your code: ++ ```java + import io.opentelemetry.api.trace.Span; + import io.opentelemetry.api.common.AttributeKey; ++ AttributeKey attributeKey = AttributeKey.stringKey("mycustomdimension"); + Span.current().setAttribute(attributeKey, "myvalue1"); + ``` ++##### [Node.js](#tab/nodejs) ++Use a custom processor: ++> [!TIP] +> Add the processor shown here *before* the Azure Monitor Exporter. 
++```javascript +const { AzureMonitorTraceExporter } = require("@azure/monitor-opentelemetry-exporter"); +const { NodeTracerProvider } = require("@opentelemetry/sdk-trace-node"); +const { SimpleSpanProcessor } = require("@opentelemetry/sdk-trace-base"); ++class SpanEnrichingProcessor { + forceFlush() { + return Promise.resolve(); + } + shutdown() { + return Promise.resolve(); + } + onStart(_span){} + onEnd(span){ + span.attributes["CustomDimension1"] = "value1"; + span.attributes["CustomDimension2"] = "value2"; } }++const provider = new NodeTracerProvider(); +const azureExporter = new AzureMonitorTraceExporter({ + connectionString: "<Your Connection String>" +}); ++provider.addSpanProcessor(new SpanEnrichingProcessor()); +provider.addSpanProcessor(new SimpleSpanProcessor(azureExporter)); ``` -#### [Node.js (JavaScript)](#tab/nodejs-javascript) +##### [Python](#tab/python) - ```javascript - const { - MeterProvider, - PeriodicExportingMetricReader, - } = require("@opentelemetry/sdk-metrics"); - const { - AzureMonitorMetricExporter, - } = require("@azure/monitor-opentelemetry-exporter"); +Use a custom processor: - const provider = new MeterProvider(); - const exporter = new AzureMonitorMetricExporter({ - connectionString: "<Your Connection String>", - }); +> [!TIP] +> Add the processor shown here *before* the Azure Monitor Exporter. - const metricReader = new PeriodicExportingMetricReader({ - exporter: exporter, - }); +```python +... +from opentelemetry.sdk.trace import TracerProvider +from opentelemetry.sdk.trace.export import BatchSpanProcessor - provider.addMetricReader(metricReader); +trace.set_tracer_provider(TracerProvider()) +span_processor = BatchSpanProcessor(exporter) +span_enrich_processor = SpanEnrichingProcessor() +trace.get_tracer_provider().add_span_processor(span_enrich_processor) +trace.get_tracer_provider().add_span_processor(span_processor) +... 
+``` - const meter = provider.getMeter("OTel.AzureMonitor.Demo"); - let histogram = meter.createHistogram("histogram"); +Add `SpanEnrichingProcessor.py` to your project with the following code: - histogram.record(1, { testKey: "testValue" }); - histogram.record(30, { testKey: "testValue2" }); - histogram.record(100, { testKey2: "testValue" }); +```python +from opentelemetry.sdk.trace import SpanProcessor ++class SpanEnrichingProcessor(SpanProcessor): ++ def on_end(self, span): + span._name = "Updated-" + span.name + span._attributes["CustomDimension1"] = "Value1" + span._attributes["CustomDimension2"] = "Value2" ```+ -#### [Node.js (TypeScript)](#tab/nodejs-typescript) +#### Set the user IP - ```typescript - import { - MeterProvider, - PeriodicExportingMetricReader, - PeriodicExportingMetricReaderOptions - } from "@opentelemetry/sdk-metrics"; - import { AzureMonitorMetricExporter } from "@azure/monitor-opentelemetry-exporter"; +You can populate the _client_IP_ field for requests by setting the `http.client_ip` attribute on the span. Application Insights uses the IP address to generate user location attributes and then [discards it by default](ip-collection.md#default-behavior). 
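In plain terms, the enrichment is a processor hook that stamps the `http.client_ip` attribute onto server (request) spans before export. The sketch below models that idea in plain Python; the `Span` and `ClientIpProcessor` types are illustrative stand-ins, not OpenTelemetry or Azure Monitor APIs.

```python
# Conceptual stand-ins only -- not OpenTelemetry API. The real per-language
# mechanisms are shown in the SDK-specific tabs.
class Span:
    def __init__(self, kind):
        self.kind = kind          # e.g. "SERVER", "CLIENT", "INTERNAL"
        self.attributes = {}

class ClientIpProcessor:
    """An on_end hook that sets the http.client_ip semantic-convention attribute."""
    def __init__(self, ip):
        self.ip = ip

    def on_end(self, span):
        # The attribute is only meaningful on request (server) spans.
        if span.kind == "SERVER":
            span.attributes["http.client_ip"] = self.ip

span = Span("SERVER")
ClientIpProcessor("203.0.113.7").on_end(span)
print(span.attributes["http.client_ip"])  # 203.0.113.7
```

Application Insights then maps this attribute to the _client_IP_ field, derives location, and discards the raw address as described above.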
- const provider = new MeterProvider(); - const exporter = new AzureMonitorMetricExporter({ - connectionString: "<Your Connection String>", - }); - - const metricReaderOptions: PeriodicExportingMetricReaderOptions = { - exporter: exporter, - }; - const metricReader = new PeriodicExportingMetricReader(metricReaderOptions); +##### [.NET](#tab/net) - provider.addMetricReader(metricReader); +Use the add [custom property example](#add-a-custom-property-to-a-span), but replace the following lines of code in `ActivityEnrichingProcessor.cs`: - const meter = provider.getMeter("OTel.AzureMonitor.Demo"); - let histogram = meter.createHistogram("histogram"); +```C# +// only applicable in case of activity.Kind == Server +activity.SetTag("http.client_ip", "<IP Address>"); +``` ++##### [Java](#tab/java) ++Java automatically populates this field. ++##### [Node.js](#tab/nodejs) ++Use the add [custom property example](#add-a-custom-property-to-a-span), but replace the following lines of code: ++```javascript +... +const { SemanticAttributes } = require("@opentelemetry/semantic-conventions"); ++class SpanEnrichingProcessor { + ... ++ onEnd(span){ + span.attributes[SemanticAttributes.HTTP_CLIENT_IP] = "<IP Address>"; + } +} +``` ++##### [Python](#tab/python) ++Use the add [custom property example](#add-a-custom-property-to-a-span), but replace the following lines of code in `SpanEnrichingProcessor.py`: ++```python +span._attributes["http.client_ip"] = "<IP Address>" +``` ++++#### Set the user ID or authenticated user ID - histogram.record(1, { "testKey": "testValue" }); - histogram.record(30, { "testKey": "testValue2" }); - histogram.record(100, { "testKey2": "testValue" }); -``` +You can populate the _user_Id_ or _user_AuthenticatedId_ field for requests by using the guidance below. User ID is an anonymous user identifier. Authenticated User ID is a known user identifier. 
-#### [Python](#tab/python) +> [!IMPORTANT] +> Consult applicable privacy laws before you set the Authenticated User ID. -```python -from opentelemetry import metrics -from opentelemetry.sdk.metrics import MeterProvider -from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader +##### [.NET](#tab/net) -from azure.monitor.opentelemetry.exporter import AzureMonitorMetricExporter +Coming soon. -exporter = AzureMonitorMetricExporter(connection_string="<your-connection-string") -reader = PeriodicExportingMetricReader(exporter) -metrics.set_meter_provider(MeterProvider(metric_readers=[reader])) -meter = metrics.get_meter_provider().get_meter("otel_azure_monitor_histogram_demo") +##### [Java](#tab/java) -histogram = meter.create_histogram("histogram") -histogram.record(1.0, {"test_key": "test_value"}) -histogram.record(100.0, {"test_key2": "test_value"}) -histogram.record(30.0, {"test_key": "test_value2"}) +Populate the `user ID` field in the `requests`, `dependencies`, or `exceptions` table. -input() -``` +1. Add `opentelemetry-api-1.0.0.jar` (or later) to your application: -+ ```xml + <dependency> + <groupId>io.opentelemetry.instrumentation</groupId> + <artifactId>opentelemetry-api</artifactId> + <version>1.0.0</version> + </dependency> + ``` -#### Counter Example +1. 
Set `user_Id` in your code: -#### [.NET](#tab/net) + ```java + import io.opentelemetry.api.trace.Span; -```csharp -using System.Diagnostics.Metrics; -using Azure.Monitor.OpenTelemetry.Exporter; -using OpenTelemetry; -using OpenTelemetry.Metrics; + Span.current().setAttribute("enduser.id", "myuser"); + ``` -public class Program -{ - private static readonly Meter meter = new("OTel.AzureMonitor.Demo"); +#### [Node.js](#tab/nodejs) - public static void Main() - { - using var meterProvider = Sdk.CreateMeterProviderBuilder() - .AddMeter("OTel.AzureMonitor.Demo") - .AddAzureMonitorMetricExporter(o => - { - o.ConnectionString = "<Your Connection String>"; - }) - .Build(); +Use the add [custom property example](#add-a-custom-property-to-a-span), but replace the following lines of code: - Counter<long> myFruitCounter = meter.CreateCounter<long>("MyFruitCounter"); +```typescript +... +import { SemanticAttributes } from "@opentelemetry/semantic-conventions"; - myFruitCounter.Add(1, new("name", "apple"), new("color", "red")); - myFruitCounter.Add(2, new("name", "lemon"), new("color", "yellow")); - myFruitCounter.Add(1, new("name", "lemon"), new("color", "yellow")); - myFruitCounter.Add(2, new("name", "apple"), new("color", "green")); - myFruitCounter.Add(5, new("name", "apple"), new("color", "red")); - myFruitCounter.Add(4, new("name", "lemon"), new("color", "yellow")); +class SpanEnrichingProcessor implements SpanProcessor{ + ... 
- System.Console.WriteLine("Press Enter key to exit."); - System.Console.ReadLine(); + onEnd(span: ReadableSpan){ + span.attributes[SemanticAttributes.ENDUSER_ID] = "<User ID>"; } } ``` -#### [Node.js (JavaScript)](#tab/nodejs-javascript) +##### [Python](#tab/python) -```javascript - const { - MeterProvider, - PeriodicExportingMetricReader, - } = require("@opentelemetry/sdk-metrics"); - const { AzureMonitorMetricExporter } = require("@azure/monitor-opentelemetry-exporter"); +Use the add [custom property example](#add-a-custom-property-to-a-span), but replace the following lines of code: - const provider = new MeterProvider(); - const exporter = new AzureMonitorMetricExporter({ - connectionString: "<Your Connection String>", - }); - const metricReader = new PeriodicExportingMetricReader({ - exporter: exporter, - }); - provider.addMetricReader(metricReader); - const meter = provider.getMeter("OTel.AzureMonitor.Demo"); - let counter = meter.createCounter("counter"); - counter.add(1, { "testKey": "testValue" }); - counter.add(5, { "testKey2": "testValue" }); - counter.add(3, { "testKey": "testValue2" }); +```python +span._attributes["enduser.id"] = "<User ID>" ``` -#### [Node.js (TypeScript)](#tab/nodejs-typescript) --```typescript - import { - MeterProvider, - PeriodicExportingMetricReader, - PeriodicExportingMetricReaderOptions - } from "@opentelemetry/sdk-metrics"; - import { AzureMonitorMetricExporter } from "@azure/monitor-opentelemetry-exporter"; -- const provider = new MeterProvider(); - const exporter = new AzureMonitorMetricExporter({ - connectionString: - connectionString: "<Your Connection String>", - }); - const metricReaderOptions: PeriodicExportingMetricReaderOptions = { - exporter: exporter, - }; - const metricReader = new PeriodicExportingMetricReader(metricReaderOptions); - provider.addMetricReader(metricReader); - const meter = provider.getMeter("OTel.AzureMonitor.Demo"); - let counter = meter.createCounter("counter"); - counter.add(1, { "testKey": 
"testValue" }); - counter.add(5, { "testKey2": "testValue" }); - counter.add(3, { "testKey": "testValue2" }); -``` + -#### [Python](#tab/python) +### Add Log Attributes + +#### [.NET](#tab/net) + +Coming soon. -```python -from opentelemetry import metrics -from opentelemetry.sdk.metrics import MeterProvider -from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader +#### [Java](#tab/java) -from azure.monitor.opentelemetry.exporter import AzureMonitorMetricExporter +Logback, Log4j, and java.util.logging are [auto-instrumented](#logs). Attaching custom dimensions to your logs can be accomplished in these ways: -exporter = AzureMonitorMetricExporter(connection_string="<your-connection-string") -reader = PeriodicExportingMetricReader(exporter) -metrics.set_meter_provider(MeterProvider(metric_readers=[reader])) -meter = metrics.get_meter_provider().get_meter("otel_azure_monitor_counter_demo") +* [Logback MDC](http://logback.qos.ch/manual/mdc.html) +* [Log4j 2 MapMessage](https://logging.apache.org/log4j/2.x/log4j-api/apidocs/org/apache/logging/log4j/message/MapMessage.html) (a `MapMessage` key of `"message"` will be captured as the log message) +* [Log4j 2 Thread Context](https://logging.apache.org/log4j/2.x/manual/thread-context.html) +* [Log4j 1.2 MDC](https://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/MDC.html) -counter = meter.create_counter("counter") -counter.add(1.0, {"test_key": "test_value"}) -counter.add(5.0, {"test_key2": "test_value"}) -counter.add(3.0, {"test_key": "test_value2"}) +#### [Node.js](#tab/nodejs) + +Coming soon. -input() -``` +#### [Python](#tab/python) + +Coming soon. -#### Gauge Example +### Filter telemetry -#### [.NET](#tab/net) +You might use the following ways to filter out telemetry before it leaves your application. 
-```csharp -using System.Diagnostics.Metrics; -using Azure.Monitor.OpenTelemetry.Exporter; -using OpenTelemetry; -using OpenTelemetry.Metrics; +#### [.NET](#tab/net) -public class Program -{ - private static readonly Meter meter = new("OTel.AzureMonitor.Demo"); +1. Many instrumentation libraries provide a filter option. For guidance, see the readme files of individual instrumentation libraries: + - [ASP.NET](https://github.com/open-telemetry/opentelemetry-dotnet-contrib/blob/Instrumentation.AspNet-1.0.0-rc9.6/src/OpenTelemetry.Instrumentation.AspNet/README.md#filter) + - [ASP.NET Core](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.7/src/OpenTelemetry.Instrumentation.AspNetCore/README.md#filter) + - [HttpClient and HttpWebRequest](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.7/src/OpenTelemetry.Instrumentation.Http/README.md#filter) - public static void Main() - { - using var meterProvider = Sdk.CreateMeterProviderBuilder() - .AddMeter("OTel.AzureMonitor.Demo") - .AddAzureMonitorMetricExporter(o => +1. 
Use a custom processor: +    +    ```csharp +    using var tracerProvider = Sdk.CreateTracerProviderBuilder() +            .AddSource("OTel.AzureMonitor.Demo") +            .AddProcessor(new ActivityFilteringProcessor()) +            .AddAzureMonitorTraceExporter(o => {-                o.ConnectionString = "<Your Connection String>"; +                o.ConnectionString = "<Your Connection String>"; }) .Build();--    var process = Process.GetCurrentProcess(); -    -    ObservableGauge<int> myObservableGauge = meter.CreateObservableGauge("Thread.State", () => GetThreadState(process)); --    System.Console.WriteLine("Press Enter key to exit."); -    System.Console.ReadLine(); -    } +    ``` -    private static IEnumerable<Measurement<int>> GetThreadState(Process process) +    Add `ActivityFilteringProcessor.cs` to your project with the following code: +    +    ```csharp +    using System.Diagnostics; +    using OpenTelemetry; +    using OpenTelemetry.Trace; +    +    public class ActivityFilteringProcessor : BaseProcessor<Activity> {-        foreach (ProcessThread thread in process.Threads) +        public override void OnStart(Activity activity) {-            yield return new((int)thread.ThreadState, new("ProcessId", process.Id), new("ThreadId", thread.Id)); +            // prevents all exporters from exporting internal activities +            if (activity.Kind == ActivityKind.Internal) +            { +                activity.IsAllDataRequested = false; +            } } }-} -``` --#### [Node.js (JavaScript)](#tab/nodejs-javascript) --```javascript --    const { -        MeterProvider, -        PeriodicExportingMetricReader -    } = require("@opentelemetry/sdk-metrics"); -    const { AzureMonitorMetricExporter } = require("@azure/monitor-opentelemetry-exporter"); --    const provider = new MeterProvider(); -    const exporter = new AzureMonitorMetricExporter({ -        connectionString: -            connectionString: "<Your Connection String>", -    }); -    const metricReader = new PeriodicExportingMetricReader({ -        exporter: exporter -    }); -    provider.addMetricReader(metricReader); -    const meter = provider.getMeter("OTel.AzureMonitor.Demo"); -    let gauge = meter.createObservableGauge("gauge"); - 
gauge.addCallback((observableResult) => { - let randomNumber = Math.floor(Math.random() * 100); - observableResult.observe(randomNumber, {"testKey": "testValue"}); - }); -``` --#### [Node.js (TypeScript)](#tab/nodejs-typescript) --```typescript - import { - MeterProvider, - PeriodicExportingMetricReader, - PeriodicExportingMetricReaderOptions - } from "@opentelemetry/sdk-metrics"; - import { AzureMonitorMetricExporter } from "@azure/monitor-opentelemetry-exporter"; -- const provider = new MeterProvider(); - const exporter = new AzureMonitorMetricExporter({ - connectionString: "<Your Connection String>", - }); - const metricReaderOptions: PeriodicExportingMetricReaderOptions = { - exporter: exporter, - }; - const metricReader = new PeriodicExportingMetricReader(metricReaderOptions); - provider.addMetricReader(metricReader); - const meter = provider.getMeter("OTel.AzureMonitor.Demo"); - let gauge = meter.createObservableGauge("gauge"); - gauge.addCallback((observableResult: ObservableResult) => { - let randomNumber = Math.floor(Math.random() * 100); - observableResult.observe(randomNumber, {"testKey": "testValue"}); - }); -``` --#### [Python](#tab/python) + ``` -```python -from typing import Iterable +1. If a particular source isn't explicitly added by using `AddSource("ActivitySourceName")`, then none of the activities created by using that source will be exported. -from opentelemetry import metrics -from opentelemetry.metrics import CallbackOptions, Observation -from opentelemetry.sdk.metrics import MeterProvider -from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader +#### [Java](#tab/java) -from azure.monitor.opentelemetry.exporter import AzureMonitorMetricExporter +See [sampling overrides](java-standalone-config.md#sampling-overrides-preview) and [telemetry processors](java-standalone-telemetry-processors.md). 
-exporter = AzureMonitorMetricExporter(connection_string="<your-connection-string") -reader = PeriodicExportingMetricReader(exporter) -metrics.set_meter_provider(MeterProvider(metric_readers=[reader])) -meter = metrics.get_meter_provider().get_meter("otel_azure_monitor_gauge_demo") +#### [Node.js](#tab/nodejs) -def observable_gauge_generator(options: CallbackOptions) -> Iterable[Observation]: - yield Observation(9, {"test_key": "test_value"}) +1. Exclude the URL option provided by many HTTP instrumentation libraries. -def observable_gauge_sequence(options: CallbackOptions) -> Iterable[Observation]: - observations = [] - for i in range(10): - observations.append( - Observation(9, {"test_key": i}) - ) - return observations + The following example shows how to exclude a certain URL from being tracked by using the [HTTP/HTTPS instrumentation library](https://github.com/open-telemetry/opentelemetry-js/tree/main/experimental/packages/opentelemetry-instrumentation-http): + + ```javascript + const { registerInstrumentations } = require( "@opentelemetry/instrumentation"); + const { HttpInstrumentation } = require( "@opentelemetry/instrumentation-http"); + const { NodeTracerProvider } = require( "@opentelemetry/sdk-trace-node"); -gauge = meter.create_observable_gauge("gauge", [observable_gauge_generator]) -gauge2 = meter.create_observable_gauge("gauge2", [observable_gauge_sequence]) + const httpInstrumentationConfig = { + ignoreIncomingRequestHook: (request) => { + // Ignore OPTIONS incoming requests + if (request.method === 'OPTIONS') { + return true; + } + return false; + }, + ignoreOutgoingRequestHook: (options) => { + // Ignore outgoing requests with /test path + if (options.path === '/test') { + return true; + } + return false; + } + }; -input() -``` + const httpInstrumentation = new HttpInstrumentation(httpInstrumentationConfig); + const provider = new NodeTracerProvider(); + provider.register(); -+ registerInstrumentations({ + instrumentations: [ + 
httpInstrumentation, + ] + }); + ``` -### Add Custom Exceptions +2. Use a custom processor. You can use a custom span processor to exclude certain spans from being exported. To mark spans to not be exported, set `TraceFlag` to `DEFAULT`. +Use the add [custom property example](#add-a-custom-property-to-a-span), but replace the following lines of code: -Select instrumentation libraries automatically support exceptions to Application Insights. -However, you may want to manually report exceptions beyond what instrumentation libraries report. -For instance, exceptions caught by your code aren't* ordinarily reported. You may wish to report them -to draw attention in relevant experiences including the failures section and end-to-end transaction views. + ```javascript + const { SpanKind, TraceFlags } = require("@opentelemetry/api"); -#### [.NET](#tab/net) + class SpanEnrichingProcessor { + ... -```csharp -using (var activity = activitySource.StartActivity("ExceptionExample")) -{ - try - { - throw new Exception("Test exception"); - } - catch (Exception ex) - { - activity?.SetStatus(ActivityStatusCode.Error); - activity?.RecordException(ex); + onEnd(span) { + if(span.kind == SpanKind.INTERNAL){ + span.spanContext().traceFlags = TraceFlags.NONE; + } + } }-} -``` + ``` + +#### [Python](#tab/python) -#### [Node.js (JavaScript)](#tab/nodejs-javascript) +1. Exclude the URL option provided by many HTTP instrumentation libraries. -```javascript -const { trace } = require("@opentelemetry/api"); -const { BasicTracerProvider, SimpleSpanProcessor } = require("@opentelemetry/sdk-trace-base"); -const { AzureMonitorTraceExporter } = require("@azure/monitor-opentelemetry-exporter"); + The following example shows how to exclude a certain URL from being tracked by using the [Flask](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-flask) instrumentation: + + ```python + ... 
+ import flask + + from opentelemetry.instrumentation.flask import FlaskInstrumentor + + # You might also populate OTEL_PYTHON_FLASK_EXCLUDED_URLS env variable + # List will consist of comma delimited regexes representing which URLs to exclude + excluded_urls = "client/.*/info,healthcheck" + + FlaskInstrumentor().instrument(excluded_urls=excluded_urls) # Do this before flask.Flask + app = flask.Flask(__name__) + ... + ``` -const provider = new BasicTracerProvider(); -const exporter = new AzureMonitorTraceExporter({ - connectionString: "<Your Connection String>", -}); -provider.addSpanProcessor(new SimpleSpanProcessor(exporter)); -provider.register(); -const tracer = trace.getTracer("example-basic-tracer-node"); -let span = tracer.startSpan("hello"); -try{ - throw new Error("Test Error"); -} -catch(error){ - span.recordException(error); -} -``` +1. Use a custom processor. You can use a custom span processor to exclude certain spans from being exported. To mark spans to not be exported, set `TraceFlag` to `DEFAULT`. + + ```python + ... + from opentelemetry.sdk.trace import TracerProvider + from opentelemetry.sdk.trace.export import BatchSpanProcessor + + trace.set_tracer_provider(TracerProvider()) + span_processor = BatchSpanProcessor(exporter) + span_filter_processor = SpanFilteringProcessor() + trace.get_tracer_provider().add_span_processor(span_filter_processor) + trace.get_tracer_provider().add_span_processor(span_processor) + ... 
+ ``` + + Add `SpanFilteringProcessor.py` to your project with the following code: + + ```python + from opentelemetry.trace import SpanContext, SpanKind, TraceFlags + from opentelemetry.sdk.trace import SpanProcessor + + class SpanFilteringProcessor(SpanProcessor): + + # prevents exporting spans from internal activities + def on_start(self, span): + if span._kind is SpanKind.INTERNAL: + span._context = SpanContext( + span.context.trace_id, + span.context.span_id, + span.context.is_remote, + TraceFlags.DEFAULT, + span.context.trace_state, + ) + + ``` -#### [Node.js (TypeScript)](#tab/nodejs-typescript) ++ +<!-- For more information, see [GitHub Repo](link). --> -```typescript -import * as opentelemetry from "@opentelemetry/api"; -import { BasicTracerProvider, SimpleSpanProcessor } from "@opentelemetry/sdk-trace-base"; -import { AzureMonitorTraceExporter } from "@azure/monitor-opentelemetry-exporter"; +### Get the trace ID or span ID + +You might want to get the trace ID or span ID. If you have logs that are sent to a different destination besides Application Insights, you might want to add the trace ID or span ID to enable better correlation when you debug and diagnose issues. -const provider = new BasicTracerProvider(); -const exporter = new AzureMonitorTraceExporter({ - connectionString: "<Your Connection String>", -}); -provider.addSpanProcessor(new SimpleSpanProcessor(exporter as any)); -provider.register(); -const tracer = opentelemetry.trace.getTracer("example-basic-tracer-node"); -let span = tracer.startSpan("hello"); -try{ - throw new Error("Test Error"); -} -catch(error){ - span.recordException(error); -} -``` +#### [.NET](#tab/net) -#### [Python](#tab/python) +Coming soon. -The OpenTelemetry Python SDK is implemented such that exceptions thrown will automatically be captured and recorded. See below for an example of this. 
+#### [Java](#tab/java) -```python -from opentelemetry import trace -from opentelemetry.sdk.trace import TracerProvider -from opentelemetry.sdk.trace.export import BatchSpanProcessor +You can use `opentelemetry-api` to get the trace ID or span ID. -from azure.monitor.opentelemetry.exporter import AzureMonitorTraceExporter +1. Add `opentelemetry-api-1.0.0.jar` (or later) to your application: -exporter = AzureMonitorTraceExporter(connection_string="<your-connection-string>") + ```xml + <dependency> + <groupId>io.opentelemetry.instrumentation</groupId> + <artifactId>opentelemetry-api</artifactId> + <version>1.0.0</version> + </dependency> + ``` -trace.set_tracer_provider(TracerProvider()) -tracer = trace.get_tracer("otel_azure_monitor_exception_demo") -span_processor = BatchSpanProcessor(exporter) -trace.get_tracer_provider().add_span_processor(span_processor) +1. Get the request trace ID and the span ID in your code: -# Exception events -try: - with tracer.start_as_current_span("hello") as span: - # This exception will be automatically recorded - raise Exception("Custom exception message.") -except Exception: - print("Exception raised") + ```java + import io.opentelemetry.api.trace.Span; -``` + Span span = Span.current(); + String traceId = span.getSpanContext().getTraceId(); + String spanId = span.getSpanContext().getSpanId(); + ``` -If you would like to record exceptions manually, you can disable that option when creating the span as show below. +#### [Node.js](#tab/nodejs) -```python -... -with tracer.start_as_current_span("hello", record_exception=False) as span: - try: - raise Exception("Custom exception message.") - except Exception as ex: - # Manually record exception - span.record_exception(ex) -... +Coming soon. -``` +#### [Python](#tab/python) ++Coming soon. 
with tracer.start_as_current_span("hello", record_exception=False) as span: You might want to enable the OpenTelemetry Protocol (OTLP) Exporter alongside your Azure Monitor Exporter to send your telemetry to two locations. > [!NOTE]-> The OTLP Exporter is shown for convenience only. We don't officially support the OTLP Exporter or any components or third-party experiences downstream of it. We suggest you open an issue with the [OpenTelemetry-Collector](https://github.com/open-telemetry/opentelemetry-collector) for OpenTelemetry issues outside the Azure support boundary. +> The OTLP Exporter is shown for convenience only. We don't officially support the OTLP Exporter or any components or third-party experiences downstream of it. #### [.NET](#tab/net) You might want to enable the OpenTelemetry Protocol (OTLP) Exporter alongside yo .Build(); ``` -#### [Node.js (JavaScript)](#tab/nodejs-javascript) +#### [Java](#tab/java) ++Coming soon. ++#### [Node.js](#tab/nodejs) 1. Install the [OpenTelemetry Collector Exporter](https://www.npmjs.com/package/@opentelemetry/exporter-otlp-http) package along with the [Azure Monitor OpenTelemetry Exporter](https://www.npmjs.com/package/@azure/monitor-opentelemetry-exporter) in your project. You might want to enable the OpenTelemetry Protocol (OTLP) Exporter alongside yo provider.register(); ``` -#### [Node.js (TypeScript)](#tab/nodejs-typescript) --1. Install the [OpenTelemetry Collector Exporter](https://www.npmjs.com/package/@opentelemetry/exporter-otlp-http) package along with the [Azure Monitor OpenTelemetry Exporter](https://www.npmjs.com/package/@azure/monitor-opentelemetry-exporter) in your project. -- ```sh - npm install @opentelemetry/exporter-otlp-http - npm install @azure/monitor-opentelemetry-exporter - ``` --2. Add the following code snippet. This example assumes you have an OpenTelemetry Collector with an OTLP receiver running. 
For details, see the [example on GitHub](https://github.com/open-telemetry/opentelemetry-js/tree/main/examples/otlp-exporter-node). -- ```typescript - import { BasicTracerProvider, SimpleSpanProcessor } from '@opentelemetry/sdk-trace-base'; - import { OTLPTraceExporter } from '@opentelemetry/exporter-otlp-http'; - import { AzureMonitorTraceExporter } from '@azure/monitor-opentelemetry-exporter'; -- const provider = new BasicTracerProvider(); - const azureMonitorExporter = new AzureMonitorTraceExporter({ - connectionString: "<Your Connection String>", - }); - const otlpExporter = new OTLPTraceExporter(); - provider.addSpanProcessor(new SimpleSpanProcessor(azureMonitorExporter)); - provider.addSpanProcessor(new SimpleSpanProcessor(otlpExporter)); - provider.register(); - ``` - #### [Python](#tab/python) 1. Install the [azure-monitor-opentelemetry-exporter](https://pypi.org/project/azure-monitor-opentelemetry-exporter/) and [opentelemetry-exporter-otlp](https://pypi.org/project/opentelemetry-exporter-otlp/) packages. You might want to enable the OpenTelemetry Protocol (OTLP) Exporter alongside yo ### Offline Storage and Automatic Retries -To improve reliability and resiliency, Azure Monitor OpenTelemetry-based offerings write to offline/local storage by default when an application loses its connection with Application Insights. It saves the application telemetry for 48 hours and periodically tries to send it again. In addition to exceeding the allowable time, telemetry will occasionally be dropped in high-load applications when the maximum file size is exceeded or the SDK doesn't have an opportunity to clear out the file. If we need to choose, the product will save more recent events over old ones. In some cases, you may wish to disable this feature to optimize application performance. 
[Learn More](data-retention-privacy.md#does-the-sdk-create-temporary-local-storage) +To improve reliability and resiliency, Azure Monitor OpenTelemetry-based offerings write to offline/local storage by default when an application loses its connection with Application Insights. It saves the application telemetry to disk and periodically tries to send it again for up to 48 hours. In addition to exceeding the allowable time, telemetry will occasionally be dropped in high-load applications when the maximum file size is exceeded or the SDK doesn't have an opportunity to clear out the file. If we need to choose, the product will save more recent events over old ones. [Learn More](data-retention-privacy.md#does-the-sdk-create-temporary-local-storage) #### [.NET](#tab/net) var tracerProvider = Sdk.CreateTracerProviderBuilder() To disable this feature, you should set `AzureMonitorExporterOptions.DisableOfflineStorage = true`. -#### [Node.js (JavaScript)](#tab/nodejs-javascript) --By default, the AzureMonitorExporter uses one of the following locations for offline storage. --- Windows- - %TEMP%\Microsoft\AzureMonitor -- Non-Windows- - %TMPDIR%/Microsoft/AzureMonitor - - /var/tmp/Microsoft/AzureMonitor --To override the default directory, you should set `storageDirectory`. +#### [Java](#tab/java) -For example: -```javascript -const exporter = new AzureMonitorTraceExporter({ - connectionString: "<Your Connection String>", - storageDirectory: "C:\\SomeDirectory", - disableOfflineStorage: false -}); -``` +Configuring Offline Storage and Automatic Retries is not available in Java. -To disable this feature, you should set `disableOfflineStorage = true`. +For a full list of available configurations, see [Configuration options](./java-standalone-config.md). -#### [Node.js (TypeScript)](#tab/nodejs-typescript) +#### [Node.js](#tab/nodejs) By default, the AzureMonitorExporter uses one of the following locations for offline storage. This section provides help with troubleshooting. 
The Azure Monitor Exporter uses EventSource for its own internal logging. The exporter logs are available to any EventListener by opting into the source named OpenTelemetry-AzureMonitor-Exporter. For troubleshooting steps, see [OpenTelemetry Troubleshooting](https://github.com/open-telemetry/opentelemetry-dotnet/tree/main/src/OpenTelemetry#troubleshooting). -#### [Node.js (JavaScript)](#tab/nodejs-javascript) +#### [Java](#tab/java) ++Diagnostic logging is enabled by default. For more information, see the dedicated [troubleshooting article](java-standalone-troubleshoot.md). ++#### [Node.js](#tab/nodejs) Azure Monitor Exporter uses the OpenTelemetry API Logger for internal logs. To enable it, use the following code: diag.setLogger(new DiagConsoleLogger(), DiagLogLevel.ALL); provider.register(); ``` -#### [Node.js (TypeScript)](#tab/nodejs-typescript) --Azure Monitor Exporter uses the OpenTelemetry API Logger for internal logs. To enable it, use the following code: --```typescript -import { diag, DiagConsoleLogger, DiagLogLevel } from "@opentelemetry/api"; -import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node"; --const provider = new NodeTracerProvider(); -diag.setLogger(new DiagConsoleLogger(), DiagLogLevel.ALL); -provider.register(); -``` - #### [Python](#tab/python) The Azure Monitor Exporter uses the Python standard logging [library](https://docs.python.org/3/library/logging.html) for its own internal logging. OpenTelemetry API and Azure Monitor Exporter logs are logged at the severity level of WARNING or ERROR for irregular activity. The INFO severity level is used for regular or successful activity. By default, the Python logging library sets the severity level to WARNING, so you must change the severity level to see logs under this severity setting. 
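As a minimal stdlib illustration of that default WARNING threshold (the logger name here is hypothetical, not one the exporter uses):

```python
import logging

# A fresh logger inherits the root logger's default WARNING threshold,
# so INFO and DEBUG records are dropped until the level is lowered.
logger = logging.getLogger("azure_monitor_demo")  # hypothetical name
logger.addHandler(logging.StreamHandler())

print(logger.isEnabledFor(logging.INFO))   # False: INFO is filtered out

logger.setLevel(logging.INFO)              # lower the threshold
print(logger.isEnabledFor(logging.INFO))   # True: INFO records now appear
```

The exporter's internal logs follow the same mechanism, so lowering the level on the relevant logger is what makes INFO-severity activity visible.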
The following example shows how to output logs of *all* severity levels to the console *and* a file: logger.addHandler(stream) Known issues for the Azure Monitor OpenTelemetry Exporters include: +#### [.NET](#tab/net) ++- Operation name is missing on dependency telemetry, which adversely affects failures and performance tab experience. +- Device model is missing on request and dependency telemetry, which adversely affects device cohort analysis. +- Database server name is left out of dependency name, which incorrectly aggregates tables with the same name on different servers. ++#### [Java](#tab/java) ++No known issues. ++#### [Node.js](#tab/nodejs) ++- Operation name is missing on dependency telemetry, which adversely affects failures and performance tab experience. +- Device model is missing on request and dependency telemetry, which adversely affects device cohort analysis. +- Database server name is left out of dependency name, which incorrectly aggregates tables with the same name on different servers. ++#### [Python](#tab/python) + - Operation name is missing on dependency telemetry, which adversely affects failures and performance tab experience. - Device model is missing on request and dependency telemetry, which adversely affects device cohort analysis. - Database server name is left out of dependency name, which incorrectly aggregates tables with the same name on different servers. ++ [!INCLUDE [azure-monitor-app-insights-test-connectivity](../../../includes/azure-monitor-app-insights-test-connectivity.md)] ## Support To get support: ### [.NET](#tab/net) -For OpenTelemetry issues, contact the [OpenTelemetry .NET community](https://github.com/open-telemetry/opentelemetry-dotnet) directly. +- For OpenTelemetry issues, contact the [OpenTelemetry .NET community](https://github.com/open-telemetry/opentelemetry-dotnet) directly. 
+- For a list of open issues related to Azure Monitor Exporter, see the [GitHub Issues Page](https://github.com/Azure/azure-sdk-for-net/issues?q=is%3Aopen+is%3Aissue+label%3A%22Monitor+-+Exporter%22). -### [Node.js (JavaScript)](#tab/nodejs-javascript) +### [Java](#tab/java) -For OpenTelemetry issues, contact the [OpenTelemetry JavaScript community](https://github.com/open-telemetry/opentelemetry-js) directly. +- For help with troubleshooting, review the [troubleshooting steps](java-standalone-troubleshoot.md). +- For Azure support issues, open an [Azure support ticket](https://azure.microsoft.com/support/create-ticket/). +- For OpenTelemetry issues, contact the [OpenTelemetry community](https://opentelemetry.io/community/) directly. +- For a list of open issues related to Azure Monitor Java Auto-Instrumentation, see the [GitHub Issues Page](https://github.com/microsoft/ApplicationInsights-Java/issues). -### [Node.js (TypeScript)](#tab/nodejs-typescript) +### [Node.js](#tab/nodejs) -For OpenTelemetry issues, contact the [OpenTelemetry JavaScript community](https://github.com/open-telemetry/opentelemetry-js) directly. +- For OpenTelemetry issues, contact the [OpenTelemetry JavaScript community](https://github.com/open-telemetry/opentelemetry-js) directly. +- For a list of open issues related to Azure Monitor Exporter, see the [GitHub Issues Page](https://github.com/Azure/azure-sdk-for-js/issues?q=is%3Aopen+is%3Aissue+label%3A%22Monitor+-+Exporter%22). ### [Python](#tab/python) -For OpenTelemetry issues, contact the [OpenTelemetry Python community](https://github.com/open-telemetry/opentelemetry-python) directly. +- For OpenTelemetry issues, contact the [OpenTelemetry Python community](https://github.com/open-telemetry/opentelemetry-python) directly. +- For a list of open issues related to Azure Monitor Exporter, see the [GitHub Issues Page](https://github.com/Azure/azure-sdk-for-python/issues?q=is%3Aopen+is%3Aissue+label%3A%22Monitor+-+Exporter%22). 
To provide feedback: - To learn more about OpenTelemetry and its community, see the [OpenTelemetry .NET GitHub repository](https://github.com/open-telemetry/opentelemetry-dotnet). - To enable usage experiences, [enable web or browser user monitoring](javascript.md). -### [Node.js (JavaScript)](#tab/nodejs-javascript) +### [Java](#tab/java) -- To review the source code, see the [Azure Monitor Exporter GitHub repository](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/monitor/monitor-opentelemetry-exporter).-- To install the npm package, check for updates, or view release notes, see the [Azure Monitor Exporter npm Package](https://www.npmjs.com/package/@azure/monitor-opentelemetry-exporter) page.-- To become more familiar with Azure Monitor Application Insights and OpenTelemetry, see the [Azure Monitor Example Application](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/monitor/monitor-opentelemetry-exporter/samples).-- To learn more about OpenTelemetry and its community, see the [OpenTelemetry JavaScript GitHub repository](https://github.com/open-telemetry/opentelemetry-js).-- To enable usage experiences, [enable web or browser user monitoring](javascript.md).+- Review [Java auto-instrumentation configuration options](java-standalone-config.md). +- To review the source code, see the [Azure Monitor Java auto-instrumentation GitHub repository](https://github.com/Microsoft/ApplicationInsights-Java). +- To learn more about OpenTelemetry and its community, see the [OpenTelemetry Java GitHub repository](https://github.com/open-telemetry/opentelemetry-java-instrumentation). +- To enable usage experiences, see [Enable web or browser user monitoring](javascript.md). +- See the [release notes](https://github.com/microsoft/ApplicationInsights-Java/releases) on GitHub. 
-### [Node.js (TypeScript)](#tab/nodejs-typescript) +### [Node.js](#tab/nodejs) - To review the source code, see the [Azure Monitor Exporter GitHub repository](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/monitor/monitor-opentelemetry-exporter). - To install the npm package, check for updates, or view release notes, see the [Azure Monitor Exporter npm Package](https://www.npmjs.com/package/@azure/monitor-opentelemetry-exporter) page. |
azure-monitor | Opentelemetry Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-overview.md | Manual instrumentation is coding against the OpenTelemetry API. In the context o > > A subset of OpenTelemetry instrumentation libraries will be supported by Azure Monitor, informed by customer feedback. We're also working to [instrument the most popular Azure Service SDKs using OpenTelemetry](https://devblogs.microsoft.com/azure-sdk/introducing-experimental-opentelemetry-support-in-the-azure-sdk-for-net/). -Auto-instrumentation enables telemetry collection through configuration without touching the application's code. Although it's more convenient, it tends to be less configurable. It's also not available in all languages. The Azure Monitor OpenTelemetry-based auto-instrumentation offering consists of the [Java 3.X OpenTelemetry-based GA offering](java-in-process-agent.md). We continue to invest in it informed by customer feedback. The OpenTelemetry community is also experimenting with C# and Python auto-instrumentation, but Azure Monitor is focused on creating a simple and effective manual instrumentation story in the near term. +Auto-instrumentation enables telemetry collection through configuration without touching the application's code. Although it's more convenient, it tends to be less configurable. It's also not available in all languages. The Azure Monitor OpenTelemetry-based auto-instrumentation offering consists of the [Java 3.X OpenTelemetry-based GA offering](opentelemetry-enable.md?tabs=java). We continue to invest in it informed by customer feedback. The OpenTelemetry community is also experimenting with C# and Python auto-instrumentation, but Azure Monitor is focused on creating a simple and effective manual instrumentation story in the near term. 
### Send your telemetry Traces | Logs The following websites consist of language-by-language guidance to enable and configure Microsoft's OpenTelemetry-based offerings. The available functionality and limitations of each offering are explained so that you can determine whether OpenTelemetry is right for your project. - [.NET](opentelemetry-enable.md)-- [Java](java-in-process-agent.md)+- [Java](opentelemetry-enable.md?tabs=java) - [JavaScript](opentelemetry-enable.md) - [Python](opentelemetry-enable.md) |
azure-monitor | Pre Aggregated Metrics Log Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/pre-aggregated-metrics-log-metrics.md | The collection endpoint pre-aggregates events before ingestion sampling. For thi |-|--|-|--| | ASP.NET | Supported <sup>1<sup> | Not supported | Not supported | | ASP.NET Core | Supported <sup>2<sup> | Not supported | Not supported |-| Java | Not supported | Not supported | [Supported](java-in-process-agent.md#metrics) | +| Java | Not supported | Not supported | [Supported](opentelemetry-enable.md?tabs=java#metrics) | | Node.js | Not supported | Not supported | Not supported | 1. ASP.NET codeless attach on virtual machines/virtual machine scale sets and on-premises emits standard metrics without dimensions. The same is true for Azure App Service, but the collection level must be set to recommended. The SDK is required for all dimensions. |
azure-monitor | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/release-notes.md | Get started with code-based monitoring: * [ASP.NET](./asp-net.md) * [ASP.NET Core](./asp-net-core.md)-* [Java](./java-in-process-agent.md) +* [Java](./opentelemetry-enable.md?tabs=java) * [Node.js](./nodejs.md) * [Python](./opencensus-python.md) |
azure-monitor | Sdk Connection String | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sdk-connection-string.md | Get started at development time with: * [ASP.NET](./asp-net.md) * [ASP.NET Core](./asp-net-core.md)-* [Java](./java-in-process-agent.md) +* [Java](./opentelemetry-enable.md?tabs=java) * [Node.js](./nodejs.md) * [Python](./opencensus-python.md) |
azure-monitor | Usage Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-overview.md | Which features of your web or mobile app are most popular? Do your users achieve The best experience is obtained by installing Application Insights both in your app server code and in your webpages. The client and server components of your app send telemetry back to the Azure portal for analysis. -1. **Server code:** Install the appropriate module for your [ASP.NET](./asp-net.md), [Azure](./app-insights-overview.md), [Java](./java-in-process-agent.md), [Node.js](./nodejs.md), or [other](./app-insights-overview.md#supported-languages) app. +1. **Server code:** Install the appropriate module for your [ASP.NET](./asp-net.md), [Azure](./app-insights-overview.md), [Java](./opentelemetry-enable.md?tabs=java), [Node.js](./nodejs.md), or [other](./app-insights-overview.md#supported-languages) app. * If you don't want to install server code, [create an Application Insights resource](./create-new-resource.md). |
azure-monitor | Best Practices Data Collection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-data-collection.md | To enable monitoring for an application, you must decide whether you'll use code **Codeless monitoring** is easiest to implement and can be configured after your code development. It doesn't require any updates to your code. For information on how to enable monitoring based on your application, see: - [Applications hosted on Azure Web Apps](app/azure-web-apps.md)-- [Java applications](app/java-in-process-agent.md)+- [Java applications](app/opentelemetry-enable.md?tabs=java) - [ASP.NET applications hosted in IIS on Azure Virtual Machines or Azure Virtual Machine Scale Sets](app/azure-vm-vmss-apps.md) - [ASP.NET applications hosted in IIS on-premises](app/status-monitor-v2-overview.md) To enable monitoring for an application, you must decide whether you'll use code - [ASP.NET applications](app/asp-net.md) - [ASP.NET Core applications](app/asp-net-core.md) - [.NET console applications](app/console.md)-- [Java](app/java-in-process-agent.md)+- [Java](app/opentelemetry-enable.md?tabs=java) - [Node.js](app/nodejs.md) - [Python](app/opencensus-python.md) - [Other platforms](app/app-insights-overview.md#supported-languages) |
azure-monitor | Data Collection Rule Edit | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-rule-edit.md | In order to update DCR, we are going to retrieve its content and save it as a fi 2. Execute the following commands to retrieve DCR content and save it to a file. Replace `<ResourceId>` with DCR ResourceID and `<FilePath>` with the name of the file to store DCR. ```PowerShell- $ResourceId = ΓÇ£<ResourceId>ΓÇ¥ # Resource ID of the DCR to edit - $FilePath = ΓÇ£<FilePath>ΓÇ¥ # Store DCR content in this file + $ResourceId = "<ResourceId>" # Resource ID of the DCR to edit + $FilePath = "<FilePath>" # Store DCR content in this file $DCR = Invoke-AzRestMethod -Path ("$ResourceId"+"?api-version=2021-09-01-preview") -Method GET $DCR.Content | ConvertFrom-Json | ConvertTo-Json -Depth 20 | Out-File -FilePath $FilePath ``` |
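For comparison, the parse-and-save step (everything after the GET call) can be sketched in Python. The DCR body below is a tiny stand-in with a single property, and the REST retrieval itself is assumed to have already happened:

```python
import json
import os
import tempfile

# Stand-in for the $DCR.Content body returned by the GET call above.
dcr_content = '{"properties": {"dataFlows": [{"streams": ["Microsoft-Syslog"]}]}}'

# Mirror ConvertFrom-Json | ConvertTo-Json -Depth 20 | Out-File:
# parse, then re-serialize pretty-printed into a local file for editing.
file_path = os.path.join(tempfile.gettempdir(), "dcr.json")
with open(file_path, "w") as f:
    json.dump(json.loads(dcr_content), f, indent=4)

# The saved file can now be edited and pushed back with a PUT request.
with open(file_path) as f:
    saved = json.load(f)
print(saved["properties"]["dataFlows"][0]["streams"])
```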
azure-monitor | Data Collection Transformations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-transformations.md | The following table describes the different goals that transformations can be us | Category | Details | |:|:|-| Remove sensitive data | You may have a data source that sends information you don't want stored for privacy or compliancy reasons.<br><br>**Filter sensitive information.** Filter out entire rows or just particular columns that contain sensitive information.<br><br>**Obfuscate sensitive information**. For example, you might replace digits with a common character in an IP address or telephone number. | +| Remove sensitive data | You may have a data source that sends information you don't want stored for privacy or compliance reasons.<br><br>**Filter sensitive information.** Filter out entire rows or just particular columns that contain sensitive information.<br><br>**Obfuscate sensitive information**. Replace information such as digits in an IP address or telephone number with a common character.<br><br>**Send to alternate table.** Send sensitive records to an alternate table with different RBAC configuration. | | Enrich data with additional or calculated information | Use a transformation to add information to data that provides business context or simplifies querying the data later.<br><br>**Add a column with additional information.** For example, you might add a column identifying whether an IP address in another column is internal or external.<br><br>**Add business specific information.** For example, you might add a column indicating a company division based on location information in other columns. 
|-| Reduce data costs | Since you're charged ingestion cost for any data sent to a Log Analytics workspace, you want to filter out any data that you don't require to reduce your costs.<br><br>**Remove entire rows.** For example, you might have a diagnostic setting to collect resource logs from a particular resource but not require all of the log entries that it generates. Create a transformation that filters out records that match a certain criteria.<br><br>**Remove a column from each row.** For example, your data may include columns with data that's redundant or has minimal value. Create a transformation that filters out columns that aren't required.<br><br>**Parse important data from a column.** You may have a table with valuable data buried in a particular column. Use a transformation to parse the valuable data into a new column and remove the original. | +| Reduce data costs | Since you're charged ingestion cost for any data sent to a Log Analytics workspace, you want to filter out any data that you don't require to reduce your costs.<br><br>**Remove entire rows.** For example, you might have a diagnostic setting to collect resource logs from a particular resource but not require all of the log entries that it generates. Create a transformation that filters out records that match certain criteria.<br><br>**Remove a column from each row.** For example, your data may include columns with data that's redundant or has minimal value. Create a transformation that filters out columns that aren't required.<br><br>**Parse important data from a column.** You may have a table with valuable data buried in a particular column. Use a transformation to parse the valuable data into a new column and remove the original.<br><br>**Send certain rows to basic logs.** Send rows in your data that require only basic query capabilities to basic logs tables for a lower ingestion cost. 
| A common use of the workspace transformation DCR is collection of [resource logs :::image type="content" source="media/data-collection-transformations/transformation-diagnostic-settings.png" lightbox="media/data-collection-transformations/transformation-diagnostic-settings.png" alt-text="Diagram of workspace transformation for resource logs configured with diagnostic settings." border="false"::: +## Multiple destinations ++Transformations allow you to send data to multiple destinations in a Log Analytics workspace using a single DCR. You provide a KQL query for each destination, and the results of each query are sent to their corresponding location. You can send different sets of data to different tables, or use multiple queries to send different sets of data to the same table. ++For example, you may send event data into Azure Monitor using the Logs ingestion API. Most of the events should be sent to an analytics table where they can be queried regularly, while audit events should be sent to a custom table configured for [basic logs](../logs/basic-logs-configure.md) to reduce your cost. ++To use multiple destinations, you must currently either manually create a new DCR or [edit an existing one](data-collection-rule-edit.md). See the [Samples](#samples) section for examples of DCRs using multiple destinations. ++> [!IMPORTANT] +> Currently, the tables in the DCR must be in the same Log Analytics workspace. To send to multiple workspaces from a single data source, use multiple DCRs and configure your application to send the data to each. ++++ ## Creating a transformation There are multiple methods to create transformations depending on the data collection method. The following table lists guidance for different methods for creating transformations. 
| Type | Reference | |:|:| | Logs ingestion API with transformation | [Send data to Azure Monitor Logs using REST API (Azure portal)](../logs/tutorial-logs-ingestion-portal.md)<br>[Send data to Azure Monitor Logs using REST API (Resource Manager templates)](../logs/tutorial-logs-ingestion-api.md) |-| Transformation in workspace DCR | [Add workspace transformation to Azure Monitor Logs using the Azure portal](../logs/tutorial-workspace-transformations-portal.md)<br>[Add workspace transformation to Azure Monitor Logs using resource manager templates](../logs/tutorial-workspace-transformations-api.md) +| Transformation in workspace DCR | [Add workspace transformation to Azure Monitor Logs using the Azure portal](../logs/tutorial-workspace-transformations-portal.md)<br>[Add workspace transformation to Azure Monitor Logs using Resource Manager templates](../logs/tutorial-workspace-transformations-api.md) ## Cost for transformations There is no direct cost for transformations, but you may incur charges for the following: There is no direct cost for transformations, but you may incur charges for the f - If your transformation increases the size of the incoming data, adding a calculated column for example, then you're charged at the normal rate for ingestion of that additional data. - If your transformation reduces the incoming data by more than 50%, then you're charged for ingestion of the amount of filtered data above 50%. + The formula to determine the filter ingestion charge from transformations is `[GB filtered out by transformations] - ( [Total GB ingested] / 2 )`. For example, suppose that you ingest 100 GB on a particular day, and transformations remove 70 GB. You would be charged for 70 GB - (100 GB / 2) or 20 GB. To avoid this charge, you should use other methods to filter incoming data before the transformation is applied. 
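The 50% rule above can be expressed as a small helper (a sketch; the `max` with zero reflects that no charge applies while filtering stays at or under half of the ingested volume):

```python
def transformation_filter_charge_gb(total_ingested_gb: float,
                                    filtered_out_gb: float) -> float:
    """GB billed when transformations filter out more than 50% of ingested data."""
    return max(0.0, filtered_out_gb - total_ingested_gb / 2)

# The worked example above: 100 GB ingested, 70 GB removed by transformations.
print(transformation_filter_charge_gb(100, 70))  # 20.0 GB billed

# Filtering out exactly half (or less) incurs no filtering charge.
print(transformation_filter_charge_gb(100, 50))  # 0.0
```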
See [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor) for current charges for ingestion and retention of log data in Azure Monitor. See [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor) > If Azure Sentinel is enabled for the Log Analytics workspace, then there is no filtering ingestion charge regardless of how much data the transformation filters. +## Samples +Following are Resource Manager templates of sample DCRs with different patterns. You can use these templates as a starting point for creating DCRs with transformations for your own scenarios. ++### Single destination ++The following example is a DCR for Azure Monitor agent that sends data to the `Syslog` table. In this example, the transformation filters the data for records with *error* in the message. +++```json +{ + "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "resources" : [ + { + "type": "Microsoft.Insights/dataCollectionRules", + "name": "singleDestinationDCR", + "apiVersion": "2021-09-01-preview", + "location": "eastus", + "properties": { + "dataSources": { + "syslog": [ + { + "name": "sysLogsDataSource", + "streams": [ + "Microsoft-Syslog" + ], + "facilityNames": [ + "auth", + "authpriv", + "cron", + "daemon", + "mark", + "kern", + "mail", + "news", + "syslog", + "user", + "uucp" + ], + "logLevels": [ + "Debug", + "Critical", + "Emergency" + ] + } + ] + }, + "destinations": { + "logAnalytics": [ + { + "workspaceResourceId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/my-resource-group/providers/Microsoft.OperationalInsights/workspaces/my-workspace", + "name": "centralWorkspace" + } + ] + }, + "dataFlows": [ + { + "streams": [ + "Microsoft-Syslog" + ], + "transformKql": "source | where message contains 'error'", + "destinations": [ + "centralWorkspace" + ] + } + ] + } + } + ] +} +``` ++### Multiple Azure tables ++The following example is a DCR 
for data from Logs Ingestion API that sends data to both the `Syslog` and `SecurityEvent` tables. This requires a separate `dataFlow` for each table, each with its own `transformKql` and `outputStream`. In this example, all incoming data is sent to the `Syslog` table while malicious data is also sent to the `SecurityEvent` table. If you didn't want to replicate the malicious data in both tables, you could add a `where` statement to the first query to remove those records. ++```json +{ + "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "resources" : [ + { + "type": "Microsoft.Insights/dataCollectionRules", + "name": "multiDestinationDCR", + "location": "eastus", + "apiVersion": "2021-09-01-preview", + "properties": { + "dataCollectionEndpointId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/my-resource-group/providers/Microsoft.Insights/dataCollectionEndpoints/my-dce", + "streamDeclarations": { + "Custom-MyTableRawData": { + "columns": [ + { + "name": "Time", + "type": "datetime" + }, + { + "name": "Computer", + "type": "string" + }, + { + "name": "AdditionalContext", + "type": "string" + } + ] + } + }, + "destinations": { + "logAnalytics": [ + { + "workspaceResourceId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/my-resource-group/providers/Microsoft.OperationalInsights/workspaces/my-workspace", + "name": "clv2ws1" + } + ] + }, + "dataFlows": [ + { + "streams": [ + "Custom-MyTableRawData" + ], + "destinations": [ + "clv2ws1" + ], + "transformKql": "source | project TimeGenerated = Time, Computer, Message = AdditionalContext", + "outputStream": "Microsoft-Syslog" + }, + { + "streams": [ + "Custom-MyTableRawData" + ], + "destinations": [ + "clv2ws1" + ], + "transformKql": "source | where AdditionalContext contains 'malicious traffic!' 
| project TimeGenerated = Time, Computer, Subject = AdditionalContext", + "outputStream": "Microsoft-SecurityEvent" + } + ] + } + } + ] +} +``` ++### Combination of Azure and custom tables ++The following example is a DCR for data from Logs Ingestion API that sends data to both the `Syslog` table and a custom table with the data in a different format. This requires a separate `dataFlow` for each table, each with its own `transformKql` and `outputStream`. +++```json +{ + "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "resources" : [ + { + "type": "Microsoft.Insights/dataCollectionRules", + "name": "multiDestinationDCR", + "location": "eastus", + "apiVersion": "2021-09-01-preview", + "properties": { + "dataCollectionEndpointId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/my-resource-group/providers/Microsoft.Insights/dataCollectionEndpoints/my-dce", + "streamDeclarations": { + "Custom-MyTableRawData": { + "columns": [ + { + "name": "Time", + "type": "datetime" + }, + { + "name": "Computer", + "type": "string" + }, + { + "name": "AdditionalContext", + "type": "string" + } + ] + } + }, + "destinations": { + "logAnalytics": [ + { + "workspaceResourceId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/my-resource-group/providers/Microsoft.OperationalInsights/workspaces/my-workspace", + "name": "clv2ws1" + } + ] + }, + "dataFlows": [ + { + "streams": [ + "Custom-MyTableRawData" + ], + "destinations": [ + "clv2ws1" + ], + "transformKql": "source | project TimeGenerated = Time, Computer, SyslogMessage = AdditionalContext", + "outputStream": "Microsoft-Syslog" + }, + { + "streams": [ + "Custom-MyTableRawData" + ], + "destinations": [ + "clv2ws1" + ], + "transformKql": "source | extend jsonContext = parse_json(AdditionalContext) | project TimeGenerated = Time, Computer, AdditionalContext = jsonContext, 
ExtendedColumn=tostring(jsonContext.CounterName)", + "outputStream": "Custom-MyTable_CL" + } + ] + } + } + ] +} +``` +++ ## Next steps - [Create a data collection rule](../agents/data-collection-rule-azure-monitor-agent.md) and an association to it from a virtual machine using the Azure Monitor agent. |
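As the first example above notes, the duplication of malicious records into both tables could be avoided by filtering them out of the `Syslog` data flow. A sketch of just the modified `dataFlows` fragment (everything else in the DCR stays the same; the KQL `!contains` operator excludes the matching rows):

```json
"dataFlows": [
    {
        "streams": [ "Custom-MyTableRawData" ],
        "destinations": [ "clv2ws1" ],
        "transformKql": "source | where AdditionalContext !contains 'malicious traffic!' | project TimeGenerated = Time, Computer, Message = AdditionalContext",
        "outputStream": "Microsoft-Syslog"
    },
    {
        "streams": [ "Custom-MyTableRawData" ],
        "destinations": [ "clv2ws1" ],
        "transformKql": "source | where AdditionalContext contains 'malicious traffic!' | project TimeGenerated = Time, Computer, Subject = AdditionalContext",
        "outputStream": "Microsoft-SecurityEvent"
    }
]
```

With this change, each incoming record lands in exactly one of the two tables instead of the malicious records appearing in both.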
azure-monitor | Log Powerbi | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-powerbi.md | Title: Log Analytics integration with Power BI and Excel -description: Learn how to send results from Log Analytics to Power BI. -+description: Learn how to send results from a query in Log Analytics to Power BI. + Previously updated : 06/22/2022 Last updated : 02/06/2023 -# Log Analytics integration with Power BI +# Integrate Log Analytics with Power BI -This article focuses on ways to feed data from Log Analytics into Power BI to create more visually appealing reports and dashboards. +[Azure Monitor Logs](../logs/data-platform-logs.md) provides an end-to-end solution for ingesting logs. From [Log Analytics](../data-platform.md), Azure Monitor's user interface for querying logs, you can connect log data to Microsoft's [Power BI](https://powerbi.microsoft.com/) data visualization platform. -## Background +This article explains how to feed data from Log Analytics into Power BI to produce reports and dashboards based on log data. -Azure Monitor Logs is a platform that provides an end-to-end solution for ingesting logs. [Azure Monitor Log Analytics](../data-platform.md) is the interface to query these logs. For more information on the entire Azure Monitor data platform including Log Analytics, see [Azure Monitor data platform](../data-platform.md). +> [!NOTE] +> You can use free Power BI features to integrate and create reports and dashboards. More advanced features, such as sharing your work, scheduled refreshes, dataflows, and incremental refresh might require purchasing a Power BI Pro or Premium account. For more information, see [Learn more about Power BI pricing and features](https://powerbi.microsoft.com/pricing/). -Power BI is the Microsoft data visualization platform. For more information on how to get started, see the [Power BI home page](https://powerbi.microsoft.com/). 
+## Create Power BI datasets and reports from Log Analytics queries -In general, you can use free Power BI features to integrate and create visually appealing reports and dashboards. +From the **Export** menu in Log Analytics, select one of the two options for creating Power BI datasets and reports from your Log Analytics queries: -More advanced features might require purchasing a Power BI Pro or Premium account. These features include: ---For more information, see [Learn more about Power BI pricing and features](https://powerbi.microsoft.com/pricing/). --## Integrate queries --Power BI uses the [M query language](/powerquery-m/power-query-m-language-specification/) as its main querying language. --Log Analytics queries can be exported to M and used in Power BI directly. After you run a successful query, select **Export to Power BI (M query)** from the **Export** dropdown list in the Log Analytics top toolbar. ---Log Analytics creates a .txt file containing the M code that can be used directly in Power BI. --## Connect your logs to a dataset --A Power BI dataset is a source of data ready for reporting and visualization. To connect a Log Analytics query to a dataset, copy the M code exported from Log Analytics into a blank query in Power BI. --For more information, see [Understanding Power BI datasets](/power-bi/service-datasets-understand/). + +- **Power BI (as an M query)**: This option exports the query (together with the connection string for the query) to a .txt file that you can use in Power BI Desktop. Use this option if you need to model or transform the data in ways that aren't available in the Power BI service. Otherwise, consider exporting the query as a new dataset. +- **Power BI (new Dataset)**: This option creates a new dataset based on your query directly in the Power BI service. After the dataset has been created, you can create reports, use Analyze in Excel, share it with others, and use other Power BI features. 
For more information, see [Create a Power BI dataset directly from Log Analytics](/power-bi/connect-data/create-dataset-log-analytics). ## Collect data with Power BI dataflows -Power BI dataflows also allow you to collect and store data. For more information, see [Power BI dataflows](/power-bi/service-dataflows-overview). --A dataflow is a type of "cloud ETL" designed to help you collect and prep your data. A dataset is the "model" designed to help you connect different entities and model them for your needs. +[Power BI dataflows](/power-bi/service-dataflows-overview) also allow you to collect and store data. A dataflow is a type of cloud ETL (extract, transform, and load) process that helps you collect and prepare your data. A dataset is the "model" designed to help you connect different entities and model them for your needs. ## Incremental refresh After your data is sent to Power BI, you can continue to use Power BI to create For more information, see [Create and share your first Power BI report](/training/modules/build-your-first-power-bi-report/). -## Excel integration --You can use the same M integration used in Power BI to integrate with an Excel spreadsheet. For more information, see [Import data from data sources (Power Query)](https://support.microsoft.com/office/import-data-from-external-data-sources-power-query-be4330b3-5356-486c-a168-b68e9e616f5a). Then paste the M query exported from Log Analytics. --For more information, see [Integrate Log Analytics and Excel](log-excel.md). - ## Next steps -Get started with [Log Analytics queries](./log-query-overview.md). +Learn how to: +- [Get started with Log Analytics queries](./log-query-overview.md). +- [Integrate Log Analytics and Excel](log-excel.md). |
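The **Power BI (as an M query)** option described above produces a small block of M code. As a rough, hypothetical sketch of its general shape (the real exported .txt file embeds your exact query and workspace connection string; the workspace ID and query below are placeholders):

```powerquery-m
let
    // Placeholder workspace ID and query; the exported file contains your real values.
    // The exported code also reshapes the JSON response into a typed table.
    Source = Json.Document(Web.Contents(
        "https://api.loganalytics.io/v1/workspaces/00000000-0000-0000-0000-000000000000/query",
        [Query = [#"query" = "Heartbeat | summarize HeartbeatCount = count() by Computer"]]
    ))
in
    Source
```

Pasting the exported code into a blank query in Power BI Desktop (**Get Data** > **Blank Query** > **Advanced Editor**) is enough to pull the query results into Power BI.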
azure-monitor | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md | Visualizations|[Azure Workbooks](./visualize/workbooks-overview.md)|New video to | Article | Description | |||-|[Azure Monitor OpenTelemetry-based auto-instrumentation for Java applications](./app/java-in-process-agent.md)|New OpenTelemetry `@WithSpan` annotation guidance.| +|[Azure Monitor OpenTelemetry-based auto-instrumentation for Java applications](./app/opentelemetry-enable.md?tabs=java)|New OpenTelemetry `@WithSpan` annotation guidance.| |[Capture Application Insights custom metrics with .NET and .NET Core](./app/tutorial-asp-net-custom-metrics.md)|Tutorial steps and images have been updated.|-|[Configuration options - Azure Monitor Application Insights for Java](./app/java-in-process-agent.md)|Connection string guidance updated.| +|[Configuration options - Azure Monitor Application Insights for Java](./app/opentelemetry-enable.md)|Connection string guidance updated.| |[Enable Application Insights for ASP.NET Core applications](./app/tutorial-asp-net-core.md)|Tutorial steps and images have been updated.| |[Enable Azure Monitor OpenTelemetry Exporter for .NET, Node.js, and Python applications (preview)](./app/opentelemetry-enable.md)|Our product feedback link at the bottom of each document has been fixed.| |[Filter and preprocess telemetry in the Application Insights SDK](./app/api-filtering-sampling.md)|Added sample initializer to control which client IP gets used as part of geo-location mapping.| Visualizations|[Azure Workbooks](./visualize/workbooks-overview.md)|New video to | Article | Description | |:|:|-|[Azure Monitor Application Insights Java](app/java-in-process-agent.md)|OpenTelemetry-based auto-instrumentation for Java applications has an updated Supported Custom Telemetry table. 
+|[Azure Monitor Application Insights Java](app/opentelemetry-enable.md?tabs=java)|OpenTelemetry-based auto-instrumentation for Java applications has an updated Supported Custom Telemetry table. |[Application Insights API for custom events and metrics](app/api-custom-events-metrics.md)|Clarification has been added that valueCount and itemCount have a minimum value of 1. |[Telemetry sampling in Azure Application Insights](app/sampling.md)|Sampling documentation has been updated to warn of the potential impact on alerting accuracy. |[Azure Monitor Application Insights Java (redirect to OpenTelemetry)](app/java-in-process-agent-redirect.md)|Java Auto-Instrumentation now redirects to OpenTelemetry documentation. |[Azure Application Insights for ASP.NET Core applications](app/asp-net-core.md)|Updated .NET Core FAQ |[Create a new Azure Monitor Application Insights workspace-based resource](app/create-workspace-resource.md)|We've linked out to Microsoft Insights components for more information on Properties. |[Application Insights SDK support guidance](app/sdk-support-guidance.md)|SDK support guidance has been updated and clarified.-|[Azure Monitor Application Insights Java](app/java-in-process-agent.md)|Example code has been updated. +|[Azure Monitor Application Insights Java](app/opentelemetry-enable.md?tabs=java)|Example code has been updated. |[IP addresses used by Azure Monitor](app/ip-addresses.md)|The IP/FQDN table has been updated. |[Continuous export of telemetry from Application Insights](app/export-telemetry.md)|The continuous export notice has been updated and clarified. |[Set up availability alerts with Application Insights](app/availability-alerts.md)|Custom Alert Rule and Alert Frequency sections have been added. 
Visualizations|[Azure Workbooks](./visualize/workbooks-overview.md)|New video to | [Application Insights logging with .NET](app/ilogger.md) | Connection string sample code has been added.| | [Application Insights SDK support guidance](app/sdk-support-guidance.md) | Updated SDK supportability guidance. | | [Azure AD authentication for Application Insights](app/azure-ad-authentication.md) | Azure AD authenticated telemetry ingestion has reached general availability.|-| [Azure Application Insights for JavaScript web apps](app/javascript.md) | Our Java on-premises page has been retired and redirected to [Azure Monitor OpenTelemetry-based auto-instrumentation for Java applications](app/java-in-process-agent.md).| +| [Azure Application Insights for JavaScript web apps](app/javascript.md) | Our Java on-premises page has been retired and redirected to [Azure Monitor OpenTelemetry-based auto-instrumentation for Java applications](app/opentelemetry-enable.md?tabs=java).| | [Azure Application Insights Telemetry Data Model - Telemetry Context](app/data-model-context.md) | Clarified that Anonymous User ID is simply User.Id for easy selection in Intellisense.| | [Continuous export of telemetry from Application Insights](app/export-telemetry.md) | On February 29, 2024, continuous export will be deprecated as part of the classic Application Insights deprecation.| | [Dependency Tracking in Azure Application Insights](app/asp-net-dependencies.md) | The Event Hubs Client SDK and ServiceBus Client SDK information has been updated.| |
azure-netapp-files | Performance Benchmarks Azure Vmware Solution | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-benchmarks-azure-vmware-solution.md | + + Title: Azure NetApp Files datastore performance benchmarks for Azure VMware Solution | Microsoft Docs +description: Describes performance benchmarks that Azure NetApp Files datastores deliver for virtual machines on Azure VMware Solution. ++documentationcenter: '' +++editor: '' ++ms.assetid: +++ na + Last updated : 02/07/2023+++# Azure NetApp Files datastore performance benchmarks for Azure VMware Solution ++This article describes performance benchmarks that Azure NetApp Files datastores deliver for virtual machines on Azure VMware Solution (AVS). ++The tested scenarios are as follows: +* One-to-multiple virtual machines running on a single AVS host and a single Azure NetApp Files datastore +* One-to-multiple Azure NetApp Files datastores with a single AVS host +* Scale-out Azure NetApp Files datastores with multiple AVS hosts ++The following `read:write` I/O ratios were tested for each scenario: `100:0, 75:25, 50:50, 25:75, 0:100` ++Benchmarks documented in this article were performed with sufficient volume throughput to prevent soft limits from affecting performance. Benchmarks can be achieved with Azure NetApp Files Premium and Ultra service levels, and in some cases with Standard service level. For more information on volume throughput, see [Performance considerations for Azure NetApp Files](azure-netapp-files-performance-considerations.md). 
+
+## Environment details
+
+The results in this article were achieved using the following environment configuration:
+
+* Azure VMware Solution host size: AV36
+* Azure VMware Solution private cloud connectivity: UltraPerformance gateway with FastPath
+* Guest virtual machine(s): Ubuntu 21.04, 16 vCPU, 64 GB Memory
+* Workload generator: `fio`
+
+## Latency
+
+Traffic latency from AVS to Azure NetApp Files datastores varies from sub-millisecond (for environments under minimal load) up to 2-3 milliseconds (for environments under medium to heavy load). The latency is potentially higher for environments that attempt to push beyond the throughput limits of various components. Latency and throughput may vary depending on several factors, including I/O size, read/write ratios, competing network traffic, and so on.
+
+## One-to-multiple virtual machines running on a single AVS host and a single Azure NetApp Files datastore
+
+In a single AVS host scenario, the AVS to Azure NetApp Files datastore I/O occurs over a single network flow. The following graphs compare the throughput and IOPS of a single virtual machine with the aggregated throughput and IOPS of four virtual machines. In the subsequent scenarios, the number of network flows increases as more hosts and datastores are added.
+
+
+## One-to-multiple Azure NetApp Files datastores with a single AVS host
+
+The following graphs compare the throughput of a single virtual machine on a single Azure NetApp Files datastore with the aggregated throughput of four Azure NetApp Files datastores. In both scenarios, each virtual machine has a VMDK on each Azure NetApp Files datastore.
+
+
+The following graphs compare the IOPS of a single virtual machine on a single Azure NetApp Files datastore with the aggregated IOPS of eight Azure NetApp Files datastores. In both scenarios, each virtual machine has a VMDK on each Azure NetApp Files datastore. 
+
+
+## Scale-out Azure NetApp Files datastores with multiple AVS hosts
+
+The following graph shows the aggregated throughput and IOPS of 16 virtual machines distributed across four AVS hosts. There are four virtual machines per AVS host, each on a different Azure NetApp Files datastore.
+
+Nearly identical results were achieved with a single virtual machine on each host with four VMDKs per virtual machine and each of those VMDKs on a separate datastore.
+
+
+## Next steps
+
+- [Attach Azure NetApp Files datastores to Azure VMware Solution hosts: Performance best practices](../azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md#performance-best-practices) |
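The benchmarks above list `fio` as the workload generator but don't publish the exact job files used. A representative sketch of a job exercising one of the listed `read:write` ratios (75:25) might look like the following; the mount path, file size, block size, and queue depth are assumptions, not the article's actual test parameters:

```ini
[global]
# Direct, asynchronous I/O so the guest page cache doesn't skew results
ioengine=libaio
direct=1
bs=64k
iodepth=32
runtime=300
time_based

[randrw-75-25]
# rwmixread=75 gives the 75:25 read:write mix from the tested ratios
rw=randrw
rwmixread=75
size=10g
filename=/mnt/anf-datastore/fio-testfile
```

Changing `rwmixread` to 100, 50, 25, or 0 would cover the remaining tested ratios.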
azure-percept | Audio Button Led Behavior | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/audio-button-led-behavior.md | |
azure-percept | Azure Percept Audio Datasheet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/azure-percept-audio-datasheet.md | |
azure-percept | Azure Percept Devkit Container Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/azure-percept-devkit-container-release-notes.md | |
azure-percept | Azure Percept Devkit Software Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/azure-percept-devkit-software-release-notes.md | |
azure-percept | Azure Percept Dk Datasheet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/azure-percept-dk-datasheet.md | |
azure-percept | Azure Percept Vision Datasheet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/azure-percept-vision-datasheet.md | |
azure-percept | Azureeyemodule Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/azureeyemodule-overview.md | |
azure-percept | Concept Security Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/concept-security-configuration.md | |
azure-percept | Connect Over Cellular Gateway | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/connect-over-cellular-gateway.md | |
azure-percept | Connect Over Cellular Usb Multitech | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/connect-over-cellular-usb-multitech.md | |
azure-percept | Connect Over Cellular Usb Quectel | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/connect-over-cellular-usb-quectel.md | |
azure-percept | Connect Over Cellular Usb Vodafone | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/connect-over-cellular-usb-vodafone.md | |
azure-percept | Connect Over Cellular Usb | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/connect-over-cellular-usb.md | |
azure-percept | Connect Over Cellular | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/connect-over-cellular.md | |
azure-percept | Create And Deploy Manually Azure Precept Devkit | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/create-and-deploy-manually-azure-precept-devkit.md | |
azure-percept | Create People Counting Solution With Azure Percept Devkit Vision | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/create-people-counting-solution-with-azure-percept-devkit-vision.md | |
azure-percept | Delete Voice Assistant Application | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/delete-voice-assistant-application.md | |
azure-percept | Dev Tools Installer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/dev-tools-installer.md | |
azure-percept | How To Capture Images | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/how-to-capture-images.md | |
azure-percept | How To Configure Voice Assistant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/how-to-configure-voice-assistant.md | |
azure-percept | How To Connect Over Ethernet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/how-to-connect-over-ethernet.md | |
azure-percept | How To Connect To Percept Dk Over Serial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/how-to-connect-to-percept-dk-over-serial.md | |
azure-percept | How To Deploy Model | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/how-to-deploy-model.md | |
azure-percept | How To Determine Your Update Strategy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/how-to-determine-your-update-strategy.md | |
azure-percept | How To Get Hardware Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/how-to-get-hardware-support.md | |
azure-percept | How To Manage Voice Assistant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/how-to-manage-voice-assistant.md | |
azure-percept | How To Set Up Advanced Network Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/how-to-set-up-advanced-network-settings.md | |
azure-percept | How To Set Up Over The Air Updates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/how-to-set-up-over-the-air-updates.md | |
azure-percept | How To Ssh Into Percept Dk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/how-to-ssh-into-percept-dk.md | |
azure-percept | How To Troubleshoot Setup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/how-to-troubleshoot-setup.md | |
azure-percept | How To Update Over The Air | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/how-to-update-over-the-air.md | |
azure-percept | How To Update Via Usb | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/how-to-update-via-usb.md | |
azure-percept | How To View Telemetry | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/how-to-view-telemetry.md | |
azure-percept | How To View Video Stream | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/how-to-view-video-stream.md | |
azure-percept | Known Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/known-issues.md | |
azure-percept | Overview 8020 Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/overview-8020-integration.md | |
azure-percept | Overview Advanced Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/overview-advanced-code.md | |
azure-percept | Overview Ai Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/overview-ai-models.md | |
azure-percept | Overview Azure Percept Audio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/overview-azure-percept-audio.md | |
azure-percept | Overview Azure Percept Dk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/overview-azure-percept-dk.md | |
azure-percept | Overview Azure Percept Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/overview-azure-percept-studio.md | |
azure-percept | Overview Azure Percept | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/overview-azure-percept.md | |
azure-percept | Overview Percept Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/overview-percept-security.md | |
azure-percept | Overview Update Experience | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/overview-update-experience.md | |
azure-percept | Quickstart Percept Audio Setup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/quickstart-percept-audio-setup.md | |
azure-percept | Quickstart Percept Dk Set Up | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/quickstart-percept-dk-set-up.md | |
azure-percept | Quickstart Percept Dk Unboxing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/quickstart-percept-dk-unboxing.md | |
azure-percept | Return To Voice Assistant Application Window | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/return-to-voice-assistant-application-window.md | |
azure-percept | Speech Module Interface Workflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/speech-module-interface-workflow.md | |
azure-percept | Troubleshoot Audio Accessory Speech Module | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/troubleshoot-audio-accessory-speech-module.md | |
azure-percept | Troubleshoot Dev Kit | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/troubleshoot-dev-kit.md | |
azure-percept | Tutorial No Code Speech | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/tutorial-no-code-speech.md | |
azure-percept | Tutorial Nocode Vision | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/tutorial-nocode-vision.md | |
azure-percept | Vision Solution Troubleshooting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/vision-solution-troubleshooting.md | |
azure-resource-manager | Template Functions Lambda | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-lambda.md | Last updated 02/06/2023 # Lambda functions for ARM templates -This article describes the lambda functions to use in ARM templates. [Lambda expressions (or lambda functions)](/dotnet/csharp/language-reference/operators/lambda-expressions) are essentially blocks of code that can be passed as an argument. They can take multiple parameters, but are restricted to a single line of code. In ARM templates, lambda expression is in this format: --```json -<lambda variable> => <expression> -``` +This article describes the lambda functions to use in ARM templates. [Lambda expressions (or lambda functions)](/dotnet/csharp/language-reference/operators/lambda-expressions) are essentially blocks of code that can be passed as an argument. They can take multiple parameters, but are restricted to a single line of code. > [!TIP] > We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [deployment](../bicep/bicep-functions-deployment.md) functions. |
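To ground the lambda syntax described in the article above, a minimal template using `filter()` with a lambda might look like this sketch (the `numbers` parameter and `evenNumbers` output name are illustrative):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "numbers": {
      "type": "array",
      "defaultValue": [ 1, 2, 3, 4, 5 ]
    }
  },
  "resources": [],
  "outputs": {
    "evenNumbers": {
      "type": "array",
      "value": "[filter(parameters('numbers'), lambda('n', equals(mod(lambdaVariables('n'), 2), 0)))]"
    }
  }
}
```

Here `lambda('n', …)` declares the lambda variable and `lambdaVariables('n')` references it inside the expression; with the default value above, the output would be the even members of the array.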
azure-video-indexer | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md | Title: Azure Video Indexer release notes | Microsoft Docs description: To stay up-to-date with the most recent developments, this article provides you with the latest updates on Azure Video Indexer. Previously updated : 01/24/2023 Last updated : 02/07/2023 To stay up-to-date with the most recent Azure Video Indexer developments, this a [!INCLUDE [announcement](./includes/deprecation-announcement.md)] +## February 2023 ++### Pricing ++On January 01, 2023 we introduced the Advanced Audio and Video SKU for Advanced presets. This was done on order to report the use of each preset, Basic, Standard & Advanced, with their own distinct meter on the Azure Billing statement. This can also be seen on Azure Cost Analysis reports. ++Starting February 1st, we’re excited to announce a 40% price reduction on the Basic Audio Analysis, Audio Analysis and Video Analysis SKUs. We took into consideration feedback from our customers and market trends to make changes that will benefit them. By reducing prices and introducing a new Advanced SKU, we are providing competitive pricing and more options for customers to balance costs and features. Additionally, as we continue to improve and add more AI capabilities, customers will be able to take advantage of these cost savings when performing new or re-indexing operations. ++This change will be implemented automatically, and customers who already have Azure discounts will continue to receive them in addition to the new pricing. 
++| | **Basic Audio Analysis** | **Standard Audio Analysis** | **Advanced Audio Analysis** | **Standard Video Analysis** | **Advanced Video Analysis** | +|-- | | | | | | +| Per input minute | $0.0126 | $0.024 | $0.04 | $0.09 | $0.15 | ++### Network Service Tag ++Video Indexer supports the use of Network Security Tag to allow network traffic from Video Indexer IPs into your network. Starting 22 January, we renamed our Network Security Service tag from `AzureVideoAnalyzerForMedia` to `VideoIndexer`. This change will require you to update your deployment scripts and/or existing configuration. See our [Network Security Documentation](network-security.md) for more info. + ## January 2023 ### Notification experience Significantly reduced number of low-quality face detection occurrences in the UI ### Speakers' names can now be edited from the Azure Video Indexer website -You can now add new speakers, rename identified speakers and modify speakers assigned to a particular transcript line using the [Azure Video Indexer website](https://www.videoindexer.ai/). For details on how to edit speakers from the **Timeline** pane, see [Edit speakers with the Azure Video Indexer website](edit-speakers.md). +You can now add new speakers, rename identified speakers and modify speakers assigned to a particular transcript line using the [Azure Video Indexer website](https://www.videoindexer.ai/). For details on how to edit speakers from the **Timeline** pane, see [Edit speakers with the Azure Video Indexer website](edit-speakers.md). The same capabilities are available from the Azure Video Indexer [upload video index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Update-Video-Index) API. 
The same capabilities are available from the Azure Video Indexer [upload video i ### A new built-in role: Video Indexer Restricted Viewer -The limited access **Video Indexer Restricted Viewer** role is intended for the [Azure Video Indexer website](https://www.videoindexer.ai/) users. The role's permitted actions relate to the [Azure Video Indexer website](https://www.videoindexer.ai/) experience. +The limited access **Video Indexer Restricted Viewer** role is intended for the [Azure Video Indexer website](https://www.videoindexer.ai/) users. The role's permitted actions relate to the [Azure Video Indexer website](https://www.videoindexer.ai/) experience. For more information, see [Manage access with the Video Indexer Restricted Viewer role](restricted-viewer-role.md). For more information, see [supported languages](language-support.md). ### Edit a speaker's name in the transcription through the API -You can now edit the name of the speakers in the transcription using the Azure Video Indexer API. +You can now edit the name of the speakers in the transcription using the Azure Video Indexer API. ### Word level time annotation with confidence score -Now supporting word level time annotation with confidence score. +Now supporting word level time annotation with confidence score. -An annotation is any type of additional information that is added to an already existing text, be it a transcription of an audio file or an original text file. +An annotation is any type of additional information that is added to an already existing text, be it a transcription of an audio file or an original text file. For more information, see [Examine word-level transcription information](edit-transcript-lines-portal.md#examine-word-level-transcription-information). -### Azure Monitor integration enabling indexing logs +### Azure Monitor integration enabling indexing logs The new set of logs, described below, enables you to better monitor your indexing pipeline. 
Azure Video Indexer now supports Diagnostics settings for indexing events. You c ### Expanded supported languages in LID and MLID through Azure Video Indexer API -Expanded the languages supported in LID (language identification) and MLID (multi language Identification) using the Azure Video Indexer API. +Expanded the languages supported in LID (language identification) and MLID (multi language Identification) using the Azure Video Indexer API. The following languages are now supported through the API: Arabic (United Arab Emirates), Arabic Modern Standard, Arabic Egypt, Arabic (Iraq), Arabic (Jordan), Arabic (Kuwait), Arabic (Oman), Arabic (Qatar), Arabic (Saudi Arabia), Arabic Syrian Arab Republic, Czech, Danish, German, English Australia, English United Kingdom, English United States, Spanish, Spanish (Mexico), Finnish, French (Canada), French, Hebrew, Hindi, Italian, Japanese, Korean, Norwegian, Dutch, Polish, Portuguese, Portuguese (Portugal), Russian, Swedish, Thai, Turkish, Ukrainian, Vietnamese, Chinese (Simplified), Chinese (Cantonese, Traditional). The new `boundingBoxes` URL parameter controls the option to set bounding boxes ### Control autoplay from the account settings -Control whether a media file will autoplay when opened using the webapp is through the user settings. Navigate to the [Azure Video Indexer website](https://www.videoindexer.ai/) -> the **Gear** icon (the top-right corner) -> **User settings** -> **Auto-play media files**. - +Control whether a media file will autoplay when opened using the webapp is through the user settings. Navigate to the [Azure Video Indexer website](https://www.videoindexer.ai/) -> the **Gear** icon (the top-right corner) -> **User settings** -> **Auto-play media files**. + ### Copy video ID from the player view **Copy video ID** is available when you select the video in the [Azure Video Indexer website](https://www.videoindexer.ai/) You can search or filter the account list using the account name or region. 
Sele ### General availability of ARM-based accounts -With an Azure Resource Management (ARM) based [paid (unlimited)](accounts-overview.md) accounts, you are able to use: +With an Azure Resource Manager (ARM)-based [paid (unlimited)](accounts-overview.md) account, you can use: - [Azure role-based access control (RBAC)](../role-based-access-control/overview.md).-- Managed Identity to better secure the communication between your Azure Media Services and Azure Video Indexer account, Network Service Tags, and native integration with Azure Monitor to monitor your account (audit and indexing logs). -- Scale and automate your [deployment with ARM-template](deploy-with-arm-template.md), [bicep](deploy-with-bicep.md) or terraform. -- [Create logic apps connector for ARM-based accounts](logic-apps-connector-arm-accounts.md). +- Managed identities to better secure the communication between your Azure Media Services and Azure Video Indexer account, Network Service Tags, and native integration with Azure Monitor to monitor your account (audit and indexing logs). +- Scale and automate your [deployment with an ARM template](deploy-with-arm-template.md), [Bicep](deploy-with-bicep.md), or Terraform. +- [Create logic apps connector for ARM-based accounts](logic-apps-connector-arm-accounts.md). To create an ARM-based account, see [create an account](create-account-portal.md). ## August 2022 -### Update topic inferencing model +### Updated topic inferencing model -Azure Video Indexer topic inferencing model was updated and now we extract more than 6.5 million topics (for example, covering topics such as Covid virus). To benefit from recent model updates you need to re-index your video files. +The Azure Video Indexer topic inferencing model was updated and now extracts more than 6.5 million topics (for example, covering topics such as the COVID virus). To benefit from recent model updates, you need to re-index your video files.
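Re-indexing many videos to pick up these model updates is easiest to script. The sketch below only assembles the `ReIndex` request route from the public Video Indexer REST API; the location, account ID, video ID, and token handling are placeholder assumptions to adapt to your account.

```python
# Hypothetical sketch: assemble a Video Indexer re-index request route.
# The location, account ID, and video ID values below are placeholders.
def reindex_url(location: str, account_id: str, video_id: str) -> str:
    """Build the ReIndex route from the public Video Indexer REST API."""
    return (
        f"https://api.videoindexer.ai/{location}"
        f"/Accounts/{account_id}/Videos/{video_id}/ReIndex"
    )

url = reindex_url("trial", "<account-id>", "<video-id>")
# Send it with PUT and a valid access token, for example:
# requests.put(url, params={"accessToken": token}).raise_for_status()
```

The request itself is commented out because it needs a live account and access token.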
-### Topic inferencing model is now available on Azure Government +### Topic inferencing model is now available on Azure Government -You can now leverage topic inferencing model in your Azure Video Indexer paid account on [Azure Government](../azure-government/documentation-government-welcome.md) in Virginia and Arizona regions. With this release we completed the AI parity between Azure global and Azure Government. -To benefit from the model updates you need to re-index your video files. +You can now use the topic inferencing model in your Azure Video Indexer paid account on [Azure Government](../azure-government/documentation-government-welcome.md) in the Virginia and Arizona regions. With this release, we completed AI parity between Azure global and Azure Government. +To benefit from the model updates, you need to re-index your video files. ### Session length is now 30 days in the Azure Video Indexer website The [Azure Video Indexer website](https://vi.microsoft.com) session length was e ### The featured clothing insight (preview) -The featured clothing insight enables more targeted ads placement. +The featured clothing insight enables more targeted ad placement. The insight provides information about key items worn by individuals within a video and the timestamps at which the clothing appears. This allows high-quality in-video contextual advertising, where relevant clothing ads are matched with the specific time within the video in which they're viewed. To view the featured clothing of an observed person, you have to index the video ## June 2022 -### Create Video Indexer blade improvements in Azure portal +### Create Video Indexer blade improvements in Azure portal -Azure Video Indexer now supports the creation of new resource using system-assigned managed identity or system and user assigned managed identity for the same resource.
+Azure Video Indexer now supports creating a new resource with a system-assigned managed identity, or with both system-assigned and user-assigned managed identities for the same resource. -You can also change the primary managed identity using the **Identity** tab in the [Azure portal](https://portal.azure.com/#home). +You can also change the primary managed identity using the **Identity** tab in the [Azure portal](https://portal.azure.com/#home). ### Limited access of celebrity recognition and face identification features -As part of Microsoft's commitment to responsible AI, we are designing and releasing Azure Video Indexer – identification and celebrity recognition features. These features are designed to protect the rights of individuals and society and fostering transparent human-computer interaction. Thus, there is a limited access and use of Azure Video Indexer – identification and celebrity recognition features. +As part of Microsoft's commitment to responsible AI, we are designing and releasing the Azure Video Indexer identification and celebrity recognition features. These features are designed to protect the rights of individuals and society and to foster transparent human-computer interaction. Thus, access to and use of the Azure Video Indexer identification and celebrity recognition features is limited. -Identification and celebrity recognition features require registration and are only available to Microsoft managed customers and partners. -Customers who wish to use this feature are required to apply and submit an [intake form](https://aka.ms/facerecognition). For more information, read [Azure Video Indexer limited access](limited-access-features.md). +Identification and celebrity recognition features require registration and are only available to Microsoft managed customers and partners. +Customers who wish to use these features are required to apply by submitting an [intake form](https://aka.ms/facerecognition).
For more information, read [Azure Video Indexer limited access](limited-access-features.md). Also, see the following: the [announcement blog post](https://aka.ms/AAh91ff) and [investment and safeguard for facial recognition](https://aka.ms/AAh9oye).- + ## May 2022 ### Line breaking in transcripts Also, see the following: the [announcement blog post](https://aka.ms/AAh91ff) an Improved line break logic to better split transcripts into sentences. New editing capabilities are now available through the Azure Video Indexer website, such as adding a new line and editing the line’s timestamp. For more information, see [Insert or remove transcript lines](edit-transcript-lines-portal.md). ### Azure Monitor integration- + Azure Video Indexer now supports Diagnostics settings for Audit events. Logs of Audit events can now be exported through diagnostics settings to Azure Log Analytics, Storage, Event Hubs, or a third-party solution. The additions enable easier access to analyze the data, monitor resource operation, and automatically create flows that act on an event. For more information, see [Monitor Azure Video Indexer](monitor-video-indexer.md). The additions enable easier access to analyze the data, monitor resource operati Optical character recognition (OCR) is improved by 60%. Face Detection is improved by 20%. Label accuracy is improved by 30% over a wide variety of videos. These improvements are available immediately in all regions and don't require any changes by the customer. -### Service tag +### Service tag Azure Video Indexer is now part of [Network Service Tags](network-security.md). Video Indexer often needs to access other Azure resources (for example, Storage). If you secure your inbound traffic to your resources with a Network Security Group, you can now select Video Indexer as part of the built-in Service Tags. This simplifies security management, as we populate the Service Tag with our public IPs.
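If you manage Network Security Group rules as code, admitting the service tag instead of raw IP ranges might look like the sketch below. This is an illustrative sketch only: the tag name `VideoIndexer`, the port, and the rule name are assumptions to confirm against your environment.

```python
# Illustrative sketch: an inbound NSG rule document that admits a
# service tag instead of hard-coded public IPs. The tag name and
# port used below are assumptions -- verify them for your deployment.
def allow_service_tag_rule(tag: str, port: str, priority: int) -> dict:
    return {
        "name": f"Allow-{tag}-Inbound",
        "properties": {
            "priority": priority,
            "direction": "Inbound",
            "access": "Allow",
            "protocol": "Tcp",
            "sourceAddressPrefix": tag,  # the service tag goes here
            "sourcePortRange": "*",
            "destinationAddressPrefix": "*",
            "destinationPortRange": port,
        },
    }

rule = allow_service_tag_rule("VideoIndexer", "443", 200)
```

Because the service handles the IP ranges behind the tag, the rule doesn't need updating when those public IPs change.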
-### Celebrity recognition toggle +### Celebrity recognition toggle -You can now enable or disable the celebrity recognition model on the account level (on classic account only). To turn on or off the model, go to the **Model customization** > toggle on/off the model. Once you disable the model, Video Indexer insights will not include the output of celebrity model and will not run the celebrity model pipeline. +You can now enable or disable the celebrity recognition model at the account level (classic accounts only). To turn the model on or off, go to **Model customization** and toggle the model on or off. Once you disable the model, Video Indexer insights won't include the output of the celebrity model and won't run the celebrity model pipeline. :::image type="content" source="./media/release-notes/celebrity-recognition.png" alt-text="Screenshot showing the celebrity recognition toggle."::: -### Azure Video Indexer repository name +### Azure Video Indexer repository name As of May 1st, the Azure Video Indexer widget repository was renamed. Use https://www.npmjs.com/package/@azure/video-indexer-widgets instead. -## April 2022 +## April 2022 ### Renamed **Azure Video Analyzer for Media** back to **Azure Video Indexer** -As of today, Azure Video analyzer for Media product name is **Azure Video Indexer** and all product related assets (web portal, marketing materials). It is a backward compatible change that has no implication on APIs and links. +As of today, the product name of Azure Video Analyzer for Media, and of all product-related assets (web portal, marketing materials), is **Azure Video Indexer**. It's a backward-compatible change that has no implications for APIs and links.
**Azure Video Indexer**'s new logo: :::image type="content" source="../applied-ai-services/media/video-indexer.svg" alt-text="New logo"::: Azure Video Indexer enables you to include speakers' characteristic based on a c The following improvements were made: * Azure Video Indexer widgets support more than 1 locale in a widget's parameter.-* The Insights widgets support initial search parameters and multiple sorting options. +* The Insights widgets support initial search parameters and multiple sorting options. * The Insights widgets also include a confirmation step before deleting a face to avoid mistakes. * The widget customization now supports width as strings (for example 100%, 100vw). To enable the dark mode open the settings panel and toggle on the **Dark Mode** :::image type="content" source="./media/release-notes/dark-mode.png" alt-text="Dark mode setting"::: -## December 2020 +## December 2020 ### Azure Video Indexer deployed in the Switzerland West and Switzerland North |
azure-vmware | Attach Azure Netapp Files To Azure Vmware Solution Hosts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md | Title: Attach Azure NetApp Files datastores to Azure VMware Solution hosts description: Learn how to create Azure NetApp Files-based NFS datastores for Azure VMware Solution hosts. Previously updated : 01/30/2023 Last updated : 02/03/2023 There are some important best practices to follow for optimal performance of NFS - For optimized performance, choose either **UltraPerformance** gateway or **ErGw3Az** gateway, and enable [FastPath](../expressroute/expressroute-howto-linkvnet-arm.md#configure-expressroute-fastpath) from a private cloud to Azure NetApp Files volumes virtual network. View more detailed information on gateway SKUs at [About ExpressRoute virtual network gateways](../expressroute/expressroute-about-virtual-network-gateways.md). - Based on your performance requirements, select the correct service level needed for the Azure NetApp Files capacity pool. See [Service levels for Azure NetApp Files](../azure-netapp-files/azure-netapp-files-service-levels.md) to understand the throughput allowed per provisioned TiB for each service level. - Create one or more volumes based on the required throughput and capacity. See [Performance considerations](../azure-netapp-files/azure-netapp-files-performance-considerations.md) for Azure NetApp Files to understand how volume size, service level, and capacity pool QoS type will determine volume throughput. For assistance calculating workload capacity and performance requirements, contact your Azure VMware Solution or Azure NetApp Files field expert. The default maximum number of Azure NetApp Files datastores is 64, but it can be increased to a maximum of 256 by submitting a support ticket. 
To submit a support ticket, see [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md).-- Work with your Microsoft representative to ensure that the Azure VMware Solution private cloud and the Azure NetApp Files volumes are deployed within the same [availability zone](../availability-zones/az-overview.md#availability-zones). Open a support request and ask that the NetApp account be pinned to the availability zone where AVS is deployed.+- Work with your Microsoft representative to ensure that the Azure VMware Solution private cloud and the Azure NetApp Files volumes are deployed within the same [availability zone](../availability-zones/az-overview.md#availability-zones). ++For performance benchmarks that Azure NetApp Files datastores deliver for virtual machines on Azure VMware Solution, see [Azure NetApp Files datastore performance benchmarks for Azure VMware Solution](../azure-netapp-files/performance-benchmarks-azure-vmware-solution.md). > [!IMPORTANT] >Changing the Azure NetApp Files volume tier after creating the datastore will result in unexpected behavior in the portal and API due to a metadata mismatch. Set the performance tier of the Azure NetApp Files volume when creating the datastore. If you need to change the tier at runtime, detach the datastore, change the performance tier of the volume, and reattach the datastore. We are working on improvements to make this seamless.
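Service level and volume size together cap the throughput an auto QoS volume can deliver, which is what the sizing guidance above refers to. The sketch below uses the per-TiB throughput figures from the Azure NetApp Files service-levels documentation; treat the exact numbers as values to verify there.

```python
# Rough sizing sketch: with an auto QoS capacity pool, volume throughput
# scales with provisioned size. Per-TiB figures follow the Azure NetApp
# Files service-levels docs -- verify the current values there.
MIB_S_PER_TIB = {"Standard": 16, "Premium": 64, "Ultra": 128}

def volume_throughput_mib_s(service_level: str, size_tib: float) -> float:
    return MIB_S_PER_TIB[service_level] * size_tib

# For example, a 4 TiB Premium volume:
print(volume_throughput_mib_s("Premium", 4))  # 256
```

If the result falls short of the workload's requirement, either a larger volume or a higher service level closes the gap.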
Now that you've attached a datastore on Azure NetApp Files-based NFS volume to y - [About ExpressRoute virtual network gateways](../expressroute/expressroute-about-virtual-network-gateways.md) - [Understand Azure NetApp Files backup](../azure-netapp-files/backup-introduction.md) - [Guidelines for Azure NetApp Files network planning](../azure-netapp-files/azure-netapp-files-network-topologies.md)+- [Azure NetApp Files datastore performance benchmarks for Azure VMware Solution](../azure-netapp-files/performance-benchmarks-azure-vmware-solution.md) ## FAQs |
azure-vmware | Concepts Hub And Spoke | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-hub-and-spoke.md | Title: Concept - Integrate an Azure VMware Solution deployment in a hub and spok description: Learn about integrating an Azure VMware Solution deployment in a hub and spoke architecture on Azure. Previously updated : 10/24/2022 Last updated : 2/8/2023 The *Hub* is an Azure Virtual Network that acts as a central point of connectivi Traffic between the on-premises datacenter, Azure VMware Solution private cloud, and the Hub goes through Azure ExpressRoute connections. Spoke virtual networks usually contain IaaS based workloads but can have PaaS services like [App Service Environment](../app-service/environment/intro.md), which has direct integration with Virtual Network, or other PaaS services with [Azure Private Link](../private-link/index.yml) enabled. >[!IMPORTANT]->You can use an existing ExpressRoute Gateway to connect to Azure VMware Solution as long as it does not exceed the limit of four ExpressRoute circuits per virtual network. However, to access Azure VMware Solution from on-premises through ExpressRoute, you must have ExpressRoute Global Reach since the ExpressRoute gateway does not provide transitive routing between its connected circuits. +>You can use an existing ExpressRoute Gateway to connect to Azure VMware Solution as long as it does not exceed the limit of four ExpressRoute circuits per virtual network. However, to access Azure VMware Solution from on-premises through ExpressRoute, you must have ExpressRoute Global Reach since the ExpressRoute gateway does not provide transitive routing between its connected circuits. The diagram shows an example of a Hub and Spoke deployment in Azure connected to on-premises and Azure VMware Solution through ExpressRoute Global Reach. |
backup | Backup Azure Private Endpoints Concept | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-private-endpoints-concept.md | + + Title: Private endpoints for Azure Backup - Overview +description: This article explains the concept of private endpoints for Azure Backup that help you perform backups while maintaining the security of your resources. ++ Last updated : 02/20/2023+++++# Overview and concepts of private endpoints (v2 experience) for Azure Backup ++Azure Backup allows you to securely perform the backup and restore operations of your data from Recovery Services vaults using [private endpoints](/azure/private-link/private-endpoint-overview). Private endpoints use one or more private IP addresses from your Azure Virtual Network (VNet), effectively bringing the service into your VNet. ++Azure Backup now provides an enhanced experience in the creation and use of private endpoints compared to the [classic experience](private-endpoints-overview.md) (v1). ++This article describes how the [enhanced capabilities of private endpoints](#key-enhancements) for Azure Backup function and help perform backups while maintaining the security of your resources. ++## Key enhancements ++- Create private endpoints without managed identities. +- No private endpoints are created for the blob and queue services. +- Use of fewer private IPs. ++## Before you start ++- While a Recovery Services vault is used by (both) Azure Backup and Azure Site Recovery, this article discusses the use of private endpoints for Azure Backup only. ++- You can create private endpoints only for new Recovery Services vaults that don't have any items registered or protected to them. ++- You can't upgrade vaults that contain private endpoints created using the classic experience to the new experience. However, you can delete all existing private endpoints, and then create new private endpoints with the v2 experience.
++- One virtual network can contain private endpoints for multiple Recovery Services vaults. Also, one Recovery Services vault can have private endpoints for it in multiple virtual networks. However, you can create a maximum of 12 private endpoints for a vault. ++- A private endpoint for a vault uses 10 private IPs, and the count may increase over time. Ensure that you have enough IPs available when creating private endpoints. ++- Private endpoints for Azure Backup don't include access to Azure Active Directory (Azure AD). Ensure that the IPs and FQDNs required for Azure AD to work in a region have outbound access allowed in the secured network when performing backup of databases in Azure VMs and backup using the MARS agent. You can also use NSG tags and Azure Firewall tags for allowing access to Azure AD, as applicable. ++- You need to re-register the Recovery Services resource provider with the subscription if you registered it before *May 1, 2020*. To re-register the provider, go to *your subscription* in the Azure portal > **Resource provider**, and then select **Microsoft.RecoveryServices** > **Re-register**. ++- [Cross-region restore](backup-create-rs-vault.md#set-cross-region-restore) for SQL and SAP HANA database backups isn't supported if the vault has private endpoints enabled. ++## Recommended and supported scenarios ++While private endpoints are enabled for the vault, they're used for backup and restore of SQL and SAP HANA workloads in an Azure VM, MARS agent backup, and DPM only. You can use the vault for backup of other workloads as well (they won't require private endpoints, though). In addition to backup of SQL and SAP HANA workloads and backup using the MARS agent, private endpoints are also used to perform file recovery for Azure VM backup.
++The following table lists the scenarios and recommendations: ++| Scenario | Recommendation | +| | | +| Backup of workloads in Azure VM (SQL, SAP HANA), backup using MARS agent, DPM server. | Use of private endpoints is recommended to allow backup and restore without needing to add any IPs/FQDNs for Azure Backup or Azure Storage from your virtual networks to an allowlist. In that scenario, ensure that VMs that host SQL databases can reach Azure AD IPs or FQDNs. | +| Azure VM backup | VM backup doesn't require you to allow access to any IPs or FQDNs. So, it doesn't require private endpoints for backup and restore of disks. <br><br> However, file recovery from a vault containing private endpoints would be restricted to virtual networks that contain a private endpoint for the vault. <br><br> When using ACL'ed unmanaged disks, ensure the storage account containing the disks allows access to trusted Microsoft services if it's ACL'ed. | +| Azure Files backup | Azure Files backups are stored in the local storage account. So it doesn't require private endpoints for backup and restore. | ++>[!Note] +>- Private endpoints are supported with only DPM server 2022 and later. +>- Private endpoints are currently not supported with MABS. ++## Difference in network connections for private endpoints +++As mentioned above, private endpoints are especially useful for backup of workloads (SQL, SAP HANA) in Azure VMs and MARS agent backups. ++In all the scenarios (with or without private endpoints), both the workload extensions (for backup of SQL and SAP HANA instances running inside Azure VMs) and the MARS agent make connection calls to Azure AD (to FQDNs mentioned under sections 56 and 59 in [Microsoft 365 Common and Office Online](/office365/enterprise/urls-and-ip-address-ranges#microsoft-365-common-and-office-online)).
++In addition to these connections, when the workload extension or MARS agent is installed for a Recovery Services vault without private endpoints, connectivity to the following domains is also required: ++| Service | Domain name | +| | | +| Azure Backup | `*.backup.windowsazure.com` | +| Azure Storage | `*.blob.core.windows.net` <br><br> `*.queue.core.windows.net` <br><br> `*.blob.storage.azure.net` | +| Azure Active Directory | Allow access to FQDNs under sections 56 and 59 according to [this article](/office365/enterprise/urls-and-ip-address-ranges#microsoft-365-common-and-office-online). | ++When the workload extension or MARS agent is installed for a Recovery Services vault with a private endpoint, the following endpoints are communicated with: ++| Service | Domain name | +| | | +| Azure Backup | `*.privatelink.<geo>.backup.windowsazure.com` | +| Azure Storage | `*.blob.core.windows.net` <br><br> `*.queue.core.windows.net` <br><br> `*.blob.storage.azure.net` | +| Azure Active Directory | Allow access to FQDNs under sections 56 and 59 according to [this article](/office365/enterprise/urls-and-ip-address-ranges#microsoft-365-common-and-office-online). | ++>[!Note] +>In the above text, `<geo>` refers to the region code (for example, **eus** for East US and **ne** for North Europe). Refer to the following lists for region codes: +>- [All public clouds](https://download.microsoft.com/download/1/2/6/126a410b-0e06-45ed-b2df-84f353034fa1/AzureRegionCodesList.docx) +>- [China](/azure/china/resources-developer-guide#check-endpoints-in-azure) +>- [Germany](../germany/germany-developer-guide.md#endpoint-mapping) +>- [US Gov](../azure-government/documentation-government-developer-guide.md) ++The storage FQDNs hit in both scenarios are the same. However, for a Recovery Services vault with a private endpoint setup, the name resolution for these should return a private IP address.
This can be achieved by: ++- Azure Private DNS zones +- Custom DNS +- DNS entries in host files +- Conditional forwarders to Azure DNS / Azure Private DNS zones. + +The private IP mappings for the storage account are listed in the private endpoint created for the Recovery Services vault. We recommend using Azure Private DNS zones, as the DNS records for blobs and queues can then be managed by Azure. When new storage accounts are allocated for the vault, the DNS record for their private IP is added automatically in the blob or queue Azure Private DNS zones. ++If you've configured a DNS proxy server using third-party proxy servers or firewalls, the above domain names must be allowed and redirected to a custom DNS server (which has DNS records for the above FQDNs) or to *168.63.129.16* on the Azure virtual network that has private DNS zones linked to it. ++The following example shows Azure Firewall used as a DNS proxy to redirect the domain name queries for the Recovery Services vault, blobs, queues, and Azure AD to 168.63.129.16. +++For more information, see [Creating and using private endpoints](private-endpoints.md). ++## Network connectivity for vault with private endpoints ++The private endpoint for Recovery Services is associated with a network interface (NIC). For private endpoint connections to work, all the traffic for the Azure service must be redirected to the network interface. You can achieve this by adding a DNS mapping for the private IP associated with the network interface against the service/blob/queue URL. ++When the workload backup extensions are installed on the virtual machine registered to a Recovery Services vault with a private endpoint, the extension attempts connection on the private URL of the Azure Backup services `<vault_id>.<azure_backup_svc>.privatelink.<geo>.backup.windowsazure.com`. ++If the private URL isn't resolving, it tries the public URL `<azure_backup_svc>.<geo>.backup.windowsazure.com`.
If the public network access for the Recovery Services vault is configured to *Allow from all networks*, the Recovery Services vault allows the requests coming from the extension over public URLs. If the public network access for the Recovery Services vault is configured to *Deny*, the Recovery Services vault denies the requests coming from the extension over public URLs. ++>[!Note] +>In the above domain names, `<geo>` determines the region code (for example, eus for East US and ne for North Europe). For more information on the region codes, see the following list: +> +>- [All public clouds](https://download.microsoft.com/download/1/2/6/126a410b-0e06-45ed-b2df-84f353034fa1/AzureRegionCodesList.docx) +>- [China](/azure/china/resources-developer-guide#check-endpoints-in-azure) +>- [Germany](/azure/germany/germany-developer-guide#endpoint-mapping) +>- [US Gov](/azure/azure-government/documentation-government-developer-guide) ++These private URLs are specific to the vault. Only extensions and agents registered to the vault can communicate with the Azure Backup service over these endpoints. If the public network access for the Recovery Services vault is configured to *Deny*, this restricts clients that aren't running in the VNet from requesting backup and restore operations on the vault. We recommend that public network access is set to *Deny* along with the private endpoint setup. As the extension and agent attempt the private URL first, the `*.privatelink.<geo>.backup.windowsazure.com` URL should resolve to the corresponding private IP associated with the private endpoint. ++There are multiple solutions for DNS resolution: ++- Azure Private DNS zones +- Custom DNS +- DNS entries in host files +- Conditional forwarders to Azure DNS / Azure Private DNS zones.
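The private-first, public-fallback behavior described above can be sketched as follows; the vault ID, service label, and region code used here are illustrative placeholders, not real values.

```python
# Sketch of the URL order the workload extension attempts, following the
# behavior described above. All identifiers here are placeholders.
def backup_service_urls(vault_id: str, svc: str, geo: str) -> list:
    private_url = f"{vault_id}.{svc}.privatelink.{geo}.backup.windowsazure.com"
    public_url = f"{svc}.{geo}.backup.windowsazure.com"
    # The extension tries the private-link name first; only if that name
    # doesn't resolve does it fall back to the public endpoint, which the
    # vault rejects when public network access is set to Deny.
    return [private_url, public_url]

urls = backup_service_urls("<vault-id>", "<service>", "eus")
```

With public network access set to *Deny*, only the first (private-link) name is ever usable, which is why its DNS resolution must work inside the VNet.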
++When the private endpoint for Recovery Services vaults is created via the Azure portal with the *Integrate with private DNS zone* option, the required DNS entries for the private IP addresses of the Azure Backup services (`*.privatelink.<geo>.backup.windowsazure.com`) are created automatically when the resource is allocated. In other solutions, you need to create the DNS entries manually for these FQDNs in the custom DNS or in the host files. ++For the manual management of DNS records after the VM discovery for the communication channel - blob or queue, see [DNS records for blobs and queues (only for custom DNS servers/host files) after the first registration](private-endpoints.md#dns-records-for-blobs-and-queues-only-for-custom-dns-servershost-files-after-the-first-registration). For the manual management of DNS records after the first backup for the backup storage account blob, see [DNS records for blobs (only for custom DNS servers/host files) after the first backup](private-endpoints.md#dns-records-for-blobs-only-for-custom-dns-servershost-files-after-the-first-backup). ++The private IP addresses for the FQDNs can be found in the private endpoint pane for the private endpoint created for the Recovery Services vault. ++The following diagram shows how the resolution works when using a private DNS zone to resolve these private service FQDNs. +++The workload extension running on an Azure VM requires connection to at least two storage account endpoints: the first is used as a communication channel (via queue messages) and the second for storing backup data. The MARS agent requires access to at least one storage account endpoint that is used for storing backup data. ++For a private endpoint enabled vault, the Azure Backup service creates private endpoints for these storage accounts. This prevents any network traffic related to Azure Backup (control plane traffic to the service and backup data to the storage blob) from leaving the virtual network.
+In addition to the Azure Backup cloud services, the workload extension and agent require connectivity to the Azure Storage accounts and Azure Active Directory. ++## Next steps ++- Learn [how to configure and manage private endpoints for Azure Backup](backup-azure-private-endpoints-configure-manage.md). + |
backup | Backup Azure Private Endpoints Configure Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-private-endpoints-configure-manage.md | + + Title: How to create and manage private endpoints (with v2 experience) for Azure Backup +description: This article explains how to configure and manage private endpoints for Azure Backup. ++ Last updated : 02/20/2023+++++# Create and use private endpoints (v2 experience) for Azure Backup ++Azure Backup allows you to securely perform the backup and restore operations of your data from Recovery Services vaults using [private endpoints](/azure/private-link/private-endpoint-overview). Private endpoints use one or more private IP addresses from your Azure Virtual Network (VNet), effectively bringing the service into your VNet. ++Azure Backup now provides an enhanced experience in the creation and use of private endpoints compared to the [classic experience](private-endpoints-overview.md) (v1). ++This article describes how to create and manage private endpoints for Azure Backup in the Recovery Services vault. ++## Create a Recovery Services vault ++You can create private endpoints for Azure Backup only for Recovery Services vaults that don't have any items protected to them (or haven't had any items attempted to be protected or registered to them in the past). So, we recommend you create a new vault for private endpoint configuration. ++For more information on creating a new vault, see [Create and configure a Recovery Services vault](backup-create-rs-vault.md). However, if you have existing vaults that already have private endpoints created, you can recreate private endpoints for them using the enhanced experience. ++## Deny public network access to the vault ++You can configure your vaults to deny access from public networks. ++Follow these steps: ++1. Go to the *vault* > **Networking**. ++2. On the **Public access** tab, select **Deny** to prevent access from public networks.
++ :::image type="content" source="./media/backup-azure-private-endpoints/deny-public-network.png" alt-text="Screenshot showing how to select the Deny option."::: ++ >[!Note] + >Once you deny access, you can still access the vault, but you can't move data to/from networks that don't contain private endpoints. For more information, see [Create private endpoints for Azure Backup](#create-private-endpoints-for-azure-backup). ++3. Select **Apply** to save the changes. ++## Create private endpoints for Azure Backup ++To create private endpoints for Azure Backup, follow these steps: ++1. Go to the *vault* for which you want to create private endpoints > **Networking**. +2. Go to the **Private access** tab and select **+Private endpoint** to start creating a new private endpoint. ++ :::image type="content" source="./media/backup-azure-private-endpoints/start-new-private-endpoint-creation.png" alt-text="Screenshot showing how to start creating a new private endpoint."::: ++3. On **Create a private endpoint**, provide the required details: ++ a. **Basics**: Provide the basic details for your private endpoints. The region should be the same as the vault and the resource to be backed up. + + :::image type="content" source="./media/backup-azure-private-endpoints/create-a-private-endpoint.png" alt-text="Screenshot showing the Create a private endpoint page to enter details for endpoint creation."::: ++ b. **Resource**: On this tab, select the PaaS resource for which you want to create your connection, and then select **Microsoft.RecoveryServices/vaults** as the resource type for your required subscription. Once done, choose the name of your Recovery Services vault as the **Resource** and **AzureBackup** as the **Target sub-resource**. ++ c. **Virtual network**: On this tab, specify the virtual network and subnet where you want the private endpoint to be created. This is the VNet where the VM is present. ++ d. **DNS**: To connect privately, you need the required DNS records.
Based on your network setup, you can choose one of the following: + - Integrate your private endpoint with a private DNS zone: Select **Yes** if you want to integrate. + - Use your custom DNS server: Select **No** if you want to use your own DNS server. + e. **Tags**: Optionally, you can add *Tags* for your private endpoint. +4. Select **Review + create**. +5. When the validation is complete, select **Create** to create the private endpoint. ++## Approve private endpoints ++If you create the private endpoint as the owner of the Recovery Services vault, it's auto-approved. Otherwise, the owner of the vault must approve the private endpoint before it can be used. ++To manually approve private endpoints via the Azure portal, follow these steps: ++1. In your **Recovery Services vault**, go to **Private endpoint connections** on the left pane. +2. Select the *private endpoint connection* that you want to approve. +3. Select **Approve**. ++ :::image type="content" source="./media/backup-azure-private-endpoints/select-private-endpoint-connection-for-approval.png" alt-text="Screenshot showing how to select and approve a private endpoint."::: ++ You can also select **Reject** or **Remove** if you want to reject or delete the endpoint connection. ++Learn how to [manually approve private endpoints using the Azure Resource Manager client](private-endpoints.md#manual-approval-of-private-endpoints-using-the-azure-resource-manager-client). ++## Manage DNS records ++You need the required DNS records in your private DNS zones or servers to connect privately. Based on your network preferences, you can either integrate your private endpoint directly with Azure private DNS zones, or use your custom DNS servers. This needs to be done for all three services: Azure Backup, Azure Blobs, and Azure Queues.
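The Backup zone name embeds the region's geo code, while the Blob and Queue zone names are fixed (see the zone table later in this article). As a small illustrative sketch, a hypothetical helper (not part of any Azure SDK) can derive the three zone names a vault's private endpoint relies on:

```python
# Illustrative sketch: derive the three Azure Private DNS zone names required
# for a Recovery Services vault's private endpoint, given a region geo code
# (for example "eus" for East US). The zone patterns come from this article;
# the helper itself is hypothetical, not an Azure SDK API.

def backup_private_dns_zones(geo: str) -> dict:
    """Return the private DNS zone required for each of the three services."""
    return {
        "Backup": f"privatelink.{geo}.backup.windowsazure.com",
        "Blob": "privatelink.blob.core.windows.net",
        "Queue": "privatelink.queue.core.windows.net",
    }

for service, zone in backup_private_dns_zones("eus").items():
    print(f"{service}: {zone}")
```

Each of these zones must exist (or be created) and be linked to the VNet that contains the resources to be backed up.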
++### When you integrate private endpoints with Azure private DNS zones ++If you choose to integrate your private endpoint with private DNS zones, Azure Backup adds the required DNS records. You can view the private DNS zones used under the DNS configuration of the private endpoint. If these DNS zones aren't present, they're created automatically during the creation of the private endpoint. ++However, you must verify that your virtual network (which contains the resources to be backed up) is properly linked to all three private DNS zones, as described below. ++>[!Note] +>If you're using proxy servers, you can choose to bypass the proxy server or perform your backups through the proxy server. To bypass a proxy server, continue to the following sections. To use the proxy server for performing your backups, see [proxy server setup details for Recovery Services vault](private-endpoints.md#set-up-proxy-server-for-recovery-services-vault-with-private-endpoint). ++### Validate virtual network links in private DNS zones ++For each private DNS zone listed (for Azure Backup, Blobs, and Queues), go to the respective **Virtual network links**. ++You'll see an entry for the virtual network for which you've created the private endpoint. If you don't see an entry, add a virtual network link to all those DNS zones that don't have one. ++### When using custom DNS server or host files ++- If you're using a custom DNS server, you can use conditional forwarders for the backup service, blob, and queue FQDNs to redirect the DNS requests to Azure DNS (168.63.129.16). Azure DNS redirects them to the Azure Private DNS zone. In such a setup, ensure that a virtual network link for the Azure Private DNS zone exists as mentioned in [this article](private-endpoints.md#when-using-custom-dns-server-or-host-files).
++ The following table lists the Azure Private DNS zones required by Azure Backup: ++ |Zone |Service | + | | | + |`privatelink.<geo>.backup.windowsazure.com` |Backup | + |`privatelink.blob.core.windows.net` |Blob | + |`privatelink.queue.core.windows.net` |Queue | ++ >[!NOTE] + > In the above text, `<geo>` refers to the region code (for example *eus* and *ne* for East US and North Europe respectively). Refer to the following lists for region codes: + > + > - [All public clouds](https://download.microsoft.com/download/1/2/6/126a410b-0e06-45ed-b2df-84f353034fa1/AzureRegionCodesList.docx) + > - [China](/azure/china/resources-developer-guide#check-endpoints-in-azure) + > - [Germany](../germany/germany-developer-guide.md#endpoint-mapping) + > - [US Gov](../azure-government/documentation-government-developer-guide.md) + > - [Geo-code list - sample XML](scripts/geo-code-list.md) ++- If you're using custom DNS servers or host files and don't have the Azure Private DNS zone setup, you need to add the DNS records required by the private endpoints to your DNS servers or in the host file. ++ Navigate to the private endpoint you created, and then go to **DNS configuration**. Then add an entry for each FQDN and IP displayed as *Type A* records in your DNS. + + If you're using a host file for name resolution, make corresponding entries in the host file for each IP and FQDN according to the format - `<private ip><space><FQDN>`. ++>[!Note] +>Azure Backup may allocate a new storage account for your vault for the backup data, and the extension or agent needs to access the respective endpoints. For more information about how to add more DNS records after registration and backup, see [the guidance in Use Private Endpoints for Backup](private-endpoints.md#use-private-endpoints-for-backup). ++++++++## Use private endpoints for backup ++Once the private endpoints created for the vault in your VNet have been approved, you can start using them to perform your backups and restores.
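Before relying on the private endpoint, it can help to confirm that name resolution from inside the VNet returns private addresses for the vault's FQDNs. A minimal sketch using only the Python standard library (the FQDN in the comment is a placeholder, not a real vault URL):

```python
# Illustrative sketch: check that name resolution returns private (RFC 1918)
# addresses, as expected when a private endpoint is in place. Run the
# resolution check from a machine inside the VNet.
import ipaddress
import socket

def is_private(ip: str) -> bool:
    """True if the address falls in a private range."""
    return ipaddress.ip_address(ip).is_private

def resolves_privately(fqdn: str) -> bool:
    """Resolve fqdn and confirm every returned address is private."""
    infos = socket.getaddrinfo(fqdn, None)
    addrs = {info[4][0] for info in infos}
    return all(is_private(a) for a in addrs)

# Addresses handed out by a private endpoint come from your subnet:
print(is_private("10.0.0.5"))     # private endpoint IP -> True
print(is_private("20.50.60.70"))  # public Azure IP -> False

# From inside the VNet, you would check the vault's backup FQDN, e.g.:
# resolves_privately("<vault-id>.privatelink.eus.backup.windowsazure.com")
```

This mirrors the **nslookup** verification described below: a resolution that returns a public IP indicates the DNS records for the private endpoint aren't in effect yet.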
++>[!IMPORTANT] +>Ensure that you've completed all the steps mentioned above in the document successfully before proceeding. To recap, you must have completed the steps in the following checklist: +> +>1. Created a (new) Recovery Services vault +>2. Enabled the vault to use a system-assigned managed identity +>3. Assigned relevant permissions to the managed identity of the vault +>4. Created a private endpoint for your vault +>5. Approved the private endpoint (if not auto-approved) +>6. Ensured all DNS records are appropriately added (except blob and queue records for custom servers, which are discussed in the following sections) ++### Check VM connectivity ++In the VM, in the locked-down network, ensure the following: ++1. The VM should have access to Azure AD. +2. Execute **nslookup** on the backup URL (`xxxxxxxx.privatelink.<geo>.backup.windowsazure.com`) from your VM, to ensure connectivity. This should return the private IP assigned in your virtual network. ++### Configure backup ++Once you've confirmed that the above checklist is complete and access is working, you can continue to configure backup of workloads to the vault. If you're using a custom DNS server, you'll need to add DNS entries for blobs and queues that are available after configuring the first backup. ++#### DNS records for blobs and queues (only for custom DNS servers/host files) after the first registration ++After you have configured backup for at least one resource on a private-endpoint-enabled vault, add the required DNS records for blobs and queues as described below. ++1. Navigate to each of these private endpoints created for the vault and go to **DNS configuration**. +1. Add an entry for each FQDN and IP displayed as *Type A* records in your DNS. ++ If you're using a host file for name resolution, make corresponding entries in the host file for each IP and FQDN according to the format - `<private ip><space><FQDN>`.
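As a small illustrative helper (hypothetical, not Microsoft tooling), the host-file lines can be generated from the FQDN/IP pairs shown on the endpoint's **DNS configuration** page, using the `<private ip><space><FQDN>` format described above:

```python
# Illustrative sketch: format host-file entries ("<private ip> <FQDN>") from
# the FQDN/IP pairs listed under a private endpoint's DNS configuration.
# The sample pairs below are placeholders, not real endpoint values.

def host_file_entries(records: list[tuple[str, str]]) -> str:
    """records: (private_ip, fqdn) pairs -> newline-separated host entries."""
    return "\n".join(f"{ip} {fqdn}" for ip, fqdn in records)

sample = [
    ("10.0.0.10", "myvault.eus.backup.windowsazure.com"),
    ("10.0.0.11", "mystorageaccount.blob.core.windows.net"),
]
print(host_file_entries(sample))
```

Append the generated lines to the host file on each machine that needs to resolve the endpoint privately.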
++ In addition to the above, there's another entry needed after the first backup, which is [discussed here](private-endpoints.md#dns-records-for-blobs-only-for-custom-dns-servershost-files-after-the-first-backup). ++### Backup and restore of workloads in Azure VM (SQL and SAP HANA) ++Once the private endpoint is created and approved, no other changes are required from the client side to use the private endpoint (unless you're using SQL Availability Groups, which we discuss later in this section). All communication and data transfer from your secured network to the vault will be performed through the private endpoint. However, if you remove private endpoints for the vault after a server (SQL or SAP HANA) has been registered to it, you'll need to re-register the container with the vault. You don't need to stop protection for it. ++#### DNS records for blobs (only for custom DNS servers/host files) after the first backup ++After you run the first backup and you're using a custom DNS server (without conditional forwarding), it's likely that your backup will fail. If that happens: ++1. Navigate to the private endpoint created for the vault and go to **DNS configuration**. +1. Add an entry for each FQDN and IP displayed as *Type A* records in your DNS. ++ If you're using a host file for name resolution, make corresponding entries in the host file for each IP and FQDN according to the format - `<private ip><space><FQDN>`. ++>[!NOTE] +>At this point, you should be able to run **nslookup** from the VM and resolve to private IP addresses when done on the vault's Backup and Storage URLs. ++### When using SQL Availability Groups ++When using SQL Availability Groups (AG), you'll need to provision conditional forwarding in the custom AG DNS as described below: ++1. Sign in to your domain controller. +1. 
Under the DNS application, add conditional forwarders for all three DNS zones (Backup, Blobs, and Queues) to the host IP 168.63.129.16 or the custom DNS server IP address, as necessary. The following screenshots show when you're forwarding to the Azure host IP. If you're using your own DNS server, replace it with the IP of your DNS server. ++### Back up and restore through MARS agent and DPM server ++When using the MARS agent to back up your on-premises resources, make sure your on-premises network (containing your resources to be backed up) is peered with the Azure VNet that contains a private endpoint for the vault, so you can use it. You can then continue to install the MARS agent and configure backup as detailed here. However, you must ensure all communication for backup happens through the peered network only. ++If you remove private endpoints for the vault after a MARS agent has been registered to it, you'll need to re-register the container with the vault. You don't need to stop protection for it. ++>[!NOTE] +> - Private endpoints are supported only with DPM server 2022 and later. +> - Private endpoints are not yet supported with MABS. ++## Deleting private endpoints ++To delete private endpoints using REST API, see [this section](/rest/api/virtualnetwork/privateendpoints/delete). ++## Next steps ++- Learn [about private endpoint for Azure Backup](backup-azure-private-endpoints-concept.md). |
backup | Private Endpoints Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/private-endpoints-overview.md | Title: Private endpoints overview description: Understand the use of private endpoints for Azure Backup and the scenarios where using private endpoints helps maintain the security of your resources. Previously updated : 11/09/2021 Last updated : 02/20/2023 -# Overview and concepts of private endpoints for Azure Backup +# Overview and concepts of private endpoints (v1 experience) for Azure Backup -Azure Backup allows you to securely back up and restore your data from your Recovery Services vaults using [private endpoints](../private-link/private-endpoint-overview.md). Private endpoints use one or more private IP addresses from your Azure Virtual Network (VNet), effectively bringing the service into your VNet. +Azure Backup allows you to back up and restore your data securely from your Recovery Services vaults using [private endpoints](../private-link/private-endpoint-overview.md). Private endpoints use one or more private IP addresses from your Azure Virtual Network (VNet), effectively bringing the service into your VNet. This article will help you understand how private endpoints for Azure Backup work and the scenarios where using private endpoints helps maintain the security of your resources. +>[!Note] +>Azure Backup now provides a new experience for creating private endpoints. [Learn more](backup-azure-private-endpoints-concept.md). + ## Before you start -- Private endpoints can be created for new Recovery Services vaults only (that don't have any items registered to the vault). So private endpoints must be created before you attempt to protect any items to the vault.+- Private endpoints can be created for new Recovery Services vaults only (that don't have any items registered to them). So private endpoints must be created before you attempt to protect any items to the vault. 
- One virtual network can contain private endpoints for multiple Recovery Services vaults. Also, one Recovery Services vault can have private endpoints for it in multiple virtual networks. However, the maximum number of private endpoints that can be created for a vault is 12.-- Once a private endpoint is created for a vault, the vault will be locked down. It won't be accessible (for backups and restores) from networks apart from ones that contain a private endpoint for the vault. If all private endpoints for the vault are removed, the vault will be accessible from all networks.+- If the public network access for the vault is set to **Allow from all networks**, the vault allows backups and restores from any machine registered to the vault. If the public network access for the vault is set to **Deny**, the vault only allows backups and restores from the machines registered to the vault that are requesting backups/restores via private IPs allocated for the vault. - A private endpoint connection for Backup uses a total of 11 private IPs in your subnet, including those used by Azure Backup for storage. This number may be higher for certain Azure regions. So we suggest that you have enough private IPs (/26) available when you attempt to create private endpoints for Backup. - While a Recovery Services vault is used by (both) Azure Backup and Azure Site Recovery, this article discusses use of private endpoints for Azure Backup only. - Private endpoints for Backup don't include access to Azure Active Directory (Azure AD), so access to Azure AD must be ensured separately. So, IPs and FQDNs required for Azure AD to work in a region will need outbound access to be allowed from the secured network when performing backup of databases in Azure VMs and backup using the MARS agent. You can also use NSG tags and Azure Firewall tags for allowing access to Azure AD, as applicable. 
While private endpoints are enabled for the vault, they're used for backup and r ## Difference in network connections due to private endpoints As mentioned above, private endpoints are especially useful for backup of workloads (SQL, SAP HANA) in Azure VMs and MARS agent backups.-In all the scenarios (with or without private endpoints), both the workload extensions (for backup of SQL and SAP HANA instances running inside Azure VMs) and the MARS agent make connection calls to AAD (to FQDNs mentioned under sections 56 and 59 in [Microsoft 365 Common and Office Online](/office365/enterprise/urls-and-ip-address-ranges#microsoft-365-common-and-office-online)). -In addition to these connections when the workload extension or MARS agent is installed for recovery services vault _without private endpoints_, connectivity to the following domains are also required: +In all the scenarios (with or without private endpoints), both the workload extensions (for backup of SQL and SAP HANA instances running inside Azure VMs) and the MARS agent make connection calls to Azure AD (to FQDNs mentioned under sections 56 and 59 in [Microsoft 365 Common and Office Online](/office365/enterprise/urls-and-ip-address-ranges#microsoft-365-common-and-office-online)). 
++In addition to these connections when the workload extension or MARS agent is installed for recovery services vault *without private endpoints*, connectivity to the following domains is also required: | Service | Domain names |-| - | | -| Azure Backup | *.backup.windowsazure.com | -| Azure Storage | *.blob.core.windows.net <br> *.queue.core.windows.net <br> *.blob.storage.azure.net | +| | | +| Azure Backup | `*.backup.windowsazure.com` | +| Azure Storage | `*.blob.core.windows.net` <br><br> `*.queue.core.windows.net` <br><br> `*.blob.storage.azure.net` | +| Azure Active Directory (Azure AD) | [Allow access to FQDNs under sections 56 and 59](/microsoft-365/enterprise/urls-and-ip-address-ranges?view=o365-worldwide&preserve-view=true#microsoft-365-common-and-office-online). | When the workload extension or MARS agent is installed for Recovery Services vault with private endpoint, the following endpoints are hit: -| Service | Domain names | -| - | | -| Azure Backup | `*.privatelink.<geo>.backup.windowsazure.com` | -| Azure Storage | *.blob.core.windows.net <br> *.queue.core.windows.net <br> *.blob.storage.azure.net | +| Service | Domain name | +| | | +| Azure Backup | `*.privatelink.<geo>.backup.windowsazure.com` | +| Azure Storage | `*.blob.core.windows.net` <br><br> `*.queue.core.windows.net` <br><br> `*.blob.storage.azure.net` | +| Azure Active Directory (Azure AD) | [Allow access to FQDNs under sections 56 and 59](/microsoft-365/enterprise/urls-and-ip-address-ranges?view=o365-worldwide&preserve-view=true#microsoft-365-common-and-office-online). | >[!Note] >In the above text, `<geo>` refers to the region code (for example, **eus** for East US and **ne** for North Europe). 
Refer to the following lists for region codes: When the workload extension or MARS agent is installed for Recovery Services vau >- [Germany](../germany/germany-developer-guide.md#endpoint-mapping) >- [US Gov](../azure-government/documentation-government-developer-guide.md) -The storage FQDNs hit in both the scenarios are same. However, for a Recovery Services vault with private endpoint setup, the name resolution for these should return a private IP address. This can be achieved by using private DNS zones, by creating DNS entries for storage account in host files, or by using conditional forwarders to custom DNS with the respective DNS entries. The private IP mappings for the storage account are listed in the private endpoint blade for the storage account ion the portal. +The storage FQDNs hit in both scenarios are the same. However, for a Recovery Services vault with private endpoint setup, the name resolution for these should return a private IP address. This can be achieved by using: ++- Azure Private DNS zones +- Custom DNS +- DNS entries in host files +- Conditional forwarders to Azure DNS or Azure Private DNS zones. ->The private endpoints for blobs and queues follow a standard nam |