Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory-b2c | Active Directory Technical Profile | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/active-directory-technical-profile.md | +<!-- docutune:ignored "AAD-" --> + Azure Active Directory B2C (Azure AD B2C) provides support for Microsoft Entra user management. This article describes the specifics of a technical profile for interacting with a claims provider that supports this standardized protocol. ## Protocol The **Name** attribute of the **Protocol** element needs to be set to `Proprietary`. The **handler** attribute must contain the fully qualified name of the protocol handler assembly `Web.TPEngine.Providers.AzureActiveDirectoryProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null`. The following [custom policy starter pack](tutorial-create-user-flows.md?pivots=b2c-custom-policy#custom-policy-starter-pack) Microsoft Entra technical profiles include the **AAD-Common** technical profile. The Microsoft Entra technical profiles don't specify the protocol because the protocol is configured in the **AAD-Common** technical profile: - **AAD-UserReadUsingAlternativeSecurityId** and **AAD-UserReadUsingAlternativeSecurityId-NoError** - Look up a social account in the directory. - **AAD-UserWriteUsingAlternativeSecurityId** - Create a new social account. - **AAD-UserReadUsingEmailAddress** - Look up a local account in the directory. |
active-directory-b2c | Azure Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/azure-monitor.md | Watch this video to learn how to configure monitoring for Azure AD B2C using Azure Monitor. ## Deployment overview -Azure AD B2C uses [Microsoft Entra ID monitoring](../active-directory/reports-monitoring/overview-monitoring-health.md). Unlike Microsoft Entra tenants, an Azure AD B2C tenant can't have a subscription associated with it. So, we need to take extra steps to enable the integration between Azure AD B2C and Log Analytics, which is where we send the logs. +Azure AD B2C uses [Microsoft Entra monitoring](../active-directory/reports-monitoring/overview-monitoring-health.md). Unlike Microsoft Entra tenants, an Azure AD B2C tenant can't have a subscription associated with it. So, we need to take extra steps to enable the integration between Azure AD B2C and Log Analytics, which is where we send the logs. To enable _Diagnostic settings_ in Microsoft Entra ID within your Azure AD B2C tenant, you use [Azure Lighthouse](../lighthouse/overview.md) to [delegate a resource](../lighthouse/concepts/architecture.md), which allows your Azure AD B2C (the **Service Provider**) to manage a Microsoft Entra ID (the **Customer**) resource. > [!TIP] To stop collecting logs to your Log Analytics workspace, delete the diagnostic setting. - For more information about adding and configuring diagnostic settings in Azure Monitor, see [Tutorial: Collect and analyze resource logs from an Azure resource](../azure-monitor/essentials/monitor-azure-resource.md). -- For information about streaming Microsoft Entra ID logs to an event hub, see [Tutorial: Stream Microsoft Entra ID logs to an Azure event hub](../active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md).+- For information about streaming Microsoft Entra logs to an event hub, see [Tutorial: Stream Microsoft Entra logs to an Azure event hub](../active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md). |
active-directory-b2c | Custom Policies Series Sign Up Or Sign In | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policies-series-sign-up-or-sign-in.md | When the custom policy runs: - **Orchestration Step 4** - This step runs if the user signs up (objectId doesn't exist), so we display the sign-up form by invoking the *UserInformationCollector* self-asserted technical profile. This step runs whether a user signs up or signs in. -- **Orchestration Step 5** - This step reads account information from Microsoft Entra ID (we invoke *AAD-UserRead* Microsoft Entra technical profile), so it runs whether a user signs up or signs in. +- **Orchestration Step 5** - This step reads account information from Microsoft Entra ID (we invoke `AAD-UserRead` Microsoft Entra technical profile), so it runs whether a user signs up or signs in. - **Orchestration Step 6** - This step invokes the *UserInputMessageClaimGenerator* technical profile to assemble the user's greeting message. |
active-directory-b2c | Customize Ui | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/customize-ui.md | If you'd like to brand all pages in the user flow, set the page layout version f ## Enable company branding in custom policy pages -Once you've configured company branding, enable it in your custom policy. Configure the [page layout version](contentdefinitions.md#migrating-to-page-layout) with page `contract` version for *all* of the content definitions in your custom policy. The format of the value must contain the word `contract`: _urn:com:microsoft:aad:b2c:elements:**contract**:page-name:version_. To specify a page layout in your custom policies that use an old **DataUri** value. For more information, learn how to [migrate to page layout](contentdefinitions.md#migrating-to-page-layout) with page version. +Once you've configured company branding, enable it in your custom policy. Configure the [page layout version](contentdefinitions.md#migrating-to-page-layout) with page `contract` version for *all* of the content definitions in your custom policy. The format of the value must contain the word `contract`: *urn:com:microsoft:aad:b2c:elements:**contract**:page-name:version*. For custom policies that use an old **DataUri** value, learn how to [migrate to page layout](contentdefinitions.md#migrating-to-page-layout) with page version. The following example shows the content definitions with their corresponding page contract and *Ocean Blue* page template: |
active-directory-b2c | Force Password Reset | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/force-password-reset.md | Once a password expiration policy has been set, you must also configure force pa ### Password expiry duration -By default, the password is set not to expire. However, the value is configurable by using the [Set-MsolPasswordPolicy](/powershell/module/msonline/set-msolpasswordpolicy) cmdlet from the Azure AD Module for Windows PowerShell. This command updates the tenant, so that all users' passwords expire after number of days you configure. +By default, the password is set not to expire. However, the value is configurable by using the [Set-MsolPasswordPolicy](/powershell/module/msonline/set-msolpasswordpolicy) cmdlet from the Azure AD PowerShell module. This command updates the tenant so that all users' passwords expire after the number of days you configure. ## Next steps |
active-directory-b2c | Javascript And Page Layout | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/javascript-and-page-layout.md | For information about the different page layout versions, see the [Page layout v To specify a page layout version for your custom policy pages: 1. Select a [page layout](contentdefinitions.md#select-a-page-layout) for the user interface elements of your application.-1. Define a [page layout version](contentdefinitions.md#migrating-to-page-layout) with page `contract` version for *all* of the content definitions in your custom policy. The format of the value must contain the word `contract`: _urn:com:microsoft:aad:b2c:elements:**contract**:page-name:version_. +1. Define a [page layout version](contentdefinitions.md#migrating-to-page-layout) with page `contract` version for *all* of the content definitions in your custom policy. The format of the value must contain the word `contract`: *urn:com:microsoft:aad:b2c:elements:**contract**:page-name:version*. The following example shows the content definition identifiers and the corresponding **DataUri** with page contract: |
active-directory-b2c | Partner Bindid | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-bindid.md | For additional information, review the following articles: - [Azure AD B2C custom policy overview](custom-policy-overview.md) - [Tutorial: Create user flows and custom policies in Azure Active Directory B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)-- [TransmitSecurity/azure-ad-b2c-bindid-integration](https://github.com/TransmitSecurity/azure-ad-b2c-bindid-integration) See, Azure AD B2C Integration+- [`TransmitSecurity/azure-ad-b2c-bindid-integration`](https://github.com/TransmitSecurity/azure-ad-b2c-bindid-integration) See Azure AD B2C Integration |
active-directory-b2c | Partner Hypr | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-hypr.md | The following architecture diagram shows the implementation. ## Configure the Azure AD B2C policy -1. Go to [Azure-AD-B2C-HYPR-Sample/policy/](https://github.com/HYPR-Corp-Public/Azure-AD-B2C-HYPR-Sample/tree/master/policy). +1. Go to [`Azure-AD-B2C-HYPR-Sample/policy/`](https://github.com/HYPR-Corp-Public/Azure-AD-B2C-HYPR-Sample/tree/master/policy). 2. Follow the instructions in [Custom policy starter pack](tutorial-create-user-flows.md?pivots=b2c-custom-policy#custom-policy-starter-pack) to download [Active-directory-b2c-custom-policy-starterpack/LocalAccounts/](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/tree/master/LocalAccounts) 3. Configure the policy for the Azure AD B2C tenant. |
active-directory-b2c | Partner Saviynt | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-saviynt.md | Enable Saviynt to perform user delete operations in Azure AD B2C. Learn more: [Application and service principal objects in Microsoft Entra ID](../active-directory/develop/app-objects-and-service-principals.md) -1. Install the latest version of Microsoft Graph PowerShell Module on a Windows workstation or server. +1. Install the latest version of the Microsoft Graph PowerShell module on a Windows workstation or server. For more information, see [Microsoft Graph PowerShell documentation](/powershell/microsoftgraph/). |
active-directory-b2c | Partner Typingdna | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-typingdna.md | These thresholds should be adjusted based on your use case. 2. Replace all instances of `apiKey` and `apiSecret` in the [TypingDNA-API-Interface](https://github.com/azure-ad-b2c/partner-integrations/tree/master/samples/TypingDNA/source-code/TypingDNA-API-Interface) solution with the credentials from your TypingDNA dashboard 3. Host the HTML files at your provider of choice following the CORS requirements [here](./customize-ui-with-html.md#3-configure-cors) 4. Replace the LoadURI elements for the `api.selfasserted.tdnasignup` and `api.selfasserted.tdnasignin` content definitions in the `TrustFrameworkExtensions.xml` file with the URI of your hosted HTML files respectively.-5. Create a B2C policy key under identity experience framework in the Microsoft Entra ID blade in the **Azure portal**. Use the `Generate` option and name this key `tdnaHashedId`. +5. Create a B2C policy key under identity experience framework in the Microsoft Entra blade in the Azure portal. Use the `Generate` option and name this key `tdnaHashedId`. 6. Replace the TenantIds in the policy files 7. Replace the ServiceURLs in all TypingDNA REST API technical profiles (REST-TDNA-VerifyUser, REST-TDNA-SaveUser, REST-TDNA-CheckUser) with the endpoint for your [TypingDNA-API-Interface API](https://github.com/azure-ad-b2c/partner-integrations/tree/master/samples/TypingDNA/source-code/TypingDNA-API-Interface). |
active-directory-b2c | User Flow Custom Attributes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/user-flow-custom-attributes.md | Extension attributes can only be registered on an application object, even thoug ## Modify your custom policy -To enable custom attributes in your policy, provide **Application ID** and Application **Object ID** in the AAD-Common technical profile metadata. The *AAD-Common* technical profile is found in the base [Microsoft Entra ID](active-directory-technical-profile.md) technical profile, and provides support for Microsoft Entra user management. Other Microsoft Entra technical profiles include the AAD-Common to use its configuration. Override the AAD-Common technical profile in the extension file. +To enable custom attributes in your policy, provide **Application ID** and Application **Object ID** in the **AAD-Common** technical profile metadata. The **AAD-Common** technical profile is found in the base [Microsoft Entra ID](active-directory-technical-profile.md) technical profile, and provides support for Microsoft Entra user management. Other Microsoft Entra technical profiles include **AAD-Common** to use its configuration. Override the **AAD-Common** technical profile in the extension file. 1. Open the extensions file of your policy. For example, <em>`SocialAndLocalAccounts/`**`TrustFrameworkExtensions.xml`**</em>. 1. Find the ClaimsProviders element. Add a new ClaimsProvider to the ClaimsProviders element. |
active-directory-b2c | User Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/user-migration.md | If the accounts you're migrating have weaker password strength than the [strong ## Next steps -The [azure-ad-b2c/user-migration](https://github.com/azure-ad-b2c/user-migration) repository on GitHub contains a seamless migration custom policy example and REST API code sample: +The [`azure-ad-b2c/user-migration`](https://github.com/azure-ad-b2c/user-migration) repository on GitHub contains a seamless migration custom policy example and REST API code sample: -[Seamless user migration custom policy & REST API code sample](https://aka.ms/b2c-account-seamless-migration) +[Seamless user migration custom policy and REST API code sample](https://aka.ms/b2c-account-seamless-migration) |
active-directory-b2c | View Audit Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/view-audit-logs.md | To download the list of activity events in a comma-separated values (CSV) file, <a name='get-audit-logs-with-the-azure-ad-reporting-api'></a> -## Get audit logs with the Microsoft Entra ID reporting API +## Get audit logs with the Microsoft Entra reporting API -Audit logs are published to the same pipeline as other activities for Microsoft Entra ID, so they can be accessed through the [Microsoft Entra ID reporting API](/graph/api/directoryaudit-list). For more information, see [Get started with the Microsoft Entra ID reporting API](../active-directory/reports-monitoring/howto-configure-prerequisites-for-reporting-api.md). +Audit logs are published to the same pipeline as other activities for Microsoft Entra ID, so they can be accessed through the [Microsoft Entra reporting API](/graph/api/directoryaudit-list). For more information, see [Get started with the Microsoft Entra reporting API](../active-directory/reports-monitoring/howto-configure-prerequisites-for-reporting-api.md). ### Enable reporting API access |
active-directory-domain-services | Alert Service Principal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/alert-service-principal.md | If a required service principal is deleted, the Azure platform can't perform aut To check which service principal is missing and must be recreated, complete the following steps: 1. In the [Microsoft Entra admin center](https://entra.microsoft.com), search for and select **Enterprise applications**. Choose *All applications* from the **Application Type** drop-down menu, then select **Apply**.-1. Search for each of the following application IDs. For Azure Global, search for AppId value *2565bd9d-da50-47d4-8b85-4c97f669dc36*. For other Azure clouds, search for AppId value *6ba9a5d4-8456-4118-b521-9c5ca10cdf84*. If no existing application is found, follow the *Resolution* steps to create the service principal or re-register the namespace. +1. Search for each of the following application IDs. For Azure Global, search for AppId value `2565bd9d-da50-47d4-8b85-4c97f669dc36`. For other Azure clouds, search for AppId value `6ba9a5d4-8456-4118-b521-9c5ca10cdf84`. If no existing application is found, follow the *Resolution* steps to create the service principal or re-register the namespace. | Application ID | Resolution | | : | : | | 2565bd9d-da50-47d4-8b85-4c97f669dc36 | [Recreate a missing service principal](#recreate-a-missing-service-principal) |- | 443155a6-77f3-45e3-882b-22b3a8d431fb | [Re-register the Microsoft.AAD namespace](#re-register-the-microsoft-aad-namespace) | - | abba844e-bc0e-44b0-947a-dc74e5d09022 | [Re-register the Microsoft.AAD namespace](#re-register-the-microsoft-aad-namespace) | - | d87dcbc6-a371-462e-88e3-28ad15ec4e64 | [Re-register the Microsoft.AAD namespace](#re-register-the-microsoft-aad-namespace) | + | 443155a6-77f3-45e3-882b-22b3a8d431fb | [Re-register the `Microsoft.AAD` namespace](#re-register-the-microsoft-aad-namespace) | + | abba844e-bc0e-44b0-947a-dc74e5d09022 | [Re-register the `Microsoft.AAD` namespace](#re-register-the-microsoft-aad-namespace) | + | d87dcbc6-a371-462e-88e3-28ad15ec4e64 | [Re-register the `Microsoft.AAD` namespace](#re-register-the-microsoft-aad-namespace) | ### Recreate a missing Service Principal The managed domain's health automatically updates itself within two hours and removes the alert. ### Re-register the Microsoft Entra namespace -If application ID *443155a6-77f3-45e3-882b-22b3a8d431fb*, *abba844e-bc0e-44b0-947a-dc74e5d09022*, or *d87dcbc6-a371-462e-88e3-28ad15ec4e64* is missing from your Microsoft Entra directory, complete the following steps to re-register the *Microsoft.AAD* resource provider: +If application ID `443155a6-77f3-45e3-882b-22b3a8d431fb`, `abba844e-bc0e-44b0-947a-dc74e5d09022`, or `d87dcbc6-a371-462e-88e3-28ad15ec4e64` is missing from your Microsoft Entra directory, complete the following steps to re-register the `Microsoft.AAD` resource provider: 1. In the [Microsoft Entra admin center](https://entra.microsoft.com), search for and select **Subscriptions**. 1. Choose the subscription associated with your managed domain. 1. From the left-hand navigation, choose **Resource Providers**.-1. Search for *Microsoft.AAD*, then select **Re-register**. +1. Search for `Microsoft.AAD`, then select **Re-register**. The managed domain's health automatically updates itself within two hours and removes the alert. |
active-directory-domain-services | Create Gmsa | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/create-gmsa.md | -Instead, a group managed service account (gMSA) can be created in the Microsoft Entra Domain ServiceS managed domain. The Windows OS automatically manages the credentials for a gMSA, which simplifies the management of large groups of resources. +Instead, a group managed service account (gMSA) can be created in the Microsoft Entra Domain Services managed domain. The Windows OS automatically manages the credentials for a gMSA, which simplifies the management of large groups of resources. This article shows you how to create a gMSA in a managed domain using Azure PowerShell. |
active-directory-domain-services | How To Data Retrieval | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/how-to-data-retrieval.md | This document describes how to retrieve data from Microsoft Entra Domain Service ## Use Microsoft Entra ID to create, read, update, and delete user objects -You can create a user in the Microsoft Entra portal or by using Graph PowerShell or Graph API. You can also read, update, and delete users. The next sections show how to do these operations in the Microsoft Entra portal. +You can create a user in the Microsoft Entra admin center or by using Graph PowerShell or Graph API. You can also read, update, and delete users. The next sections show how to do these operations in the Microsoft Entra admin center. ### Create, read, or update a user -You can create a new user using the Microsoft Entra portal. +You can create a new user using the Microsoft Entra admin center. To add a new user, follow these steps: 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [User Administrator](../active-directory/roles/permissions-reference.md#user-administrator). When a user is deleted, any licenses consumed by the user are made available for <a name='use-rsat-tools-to-connect-to-an-azure-ad-ds-managed-domain-and-view-users'></a> -## Use RSAT tools to connect to a Microsoft Entra DS managed domain and view users +## Use RSAT tools to connect to a Microsoft Entra Domain Services managed domain and view users Sign in to an administrative workstation with a user account that's a member of the *AAD DC Administrators* group. The following steps require installation of [Remote Server Administration Tools (RSAT)](tutorial-create-management-vm.md#install-active-directory-administrative-tools). In the following example output, a user account named *Contoso Admin* and a group for *AAD DC Administrators* are shown in this container. - ![View the list of Microsoft Entra DS domain users in the Active Directory Administrative Center](./media/tutorial-create-management-vm/list-azure-ad-users.png) + ![View the list of Microsoft Entra Domain Services domain users in the Active Directory Administrative Center](./media/tutorial-create-management-vm/list-azure-ad-users.png) 1. To see the computers that are joined to the managed domain, select the **AADDC Computers** container. An entry for the current virtual machine, such as *myVM*, is listed. Computer accounts for all devices that are joined to the managed domain are stored in this *AADDC Computers* container. You can also use the *Active Directory Module for Windows PowerShell*, installed as part of the administrative tools, to manage common actions in your managed domain. ## Next steps-* [Microsoft Entra DS Overview](overview.md) +* [Microsoft Entra Domain Services Overview](overview.md) |
active-directory-domain-services | Synchronization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/synchronization.md | Title: How synchronization works in Microsoft Entra Domain Services | Microsoft Docs -description: Learn how the synchronization process works between Microsoft Entra or an on-premises environment to a Microsoft Entra Domain Services managed domain. +description: Learn how the synchronization process works between Microsoft Entra ID or an on-premises environment to a Microsoft Entra Domain Services managed domain. |
active-directory | On Premises Ecma Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-ecma-troubleshoot.md | By default, the agent emits minimal error messages and stack trace information. To gather more information for troubleshooting agent-related problems: - 1. Install the AADCloudSyncTools PowerShell module as described in [AADCloudSyncTools PowerShell Module for Microsoft Entra Connect cloud sync](../hybrid/cloud-sync/reference-powershell.md#install-the-aadcloudsynctools-powershell-module). + 1. Install the `AADCloudSyncTools` PowerShell module as described in [`AADCloudSyncTools` PowerShell module for Microsoft Entra Connect cloud sync](../hybrid/cloud-sync/reference-powershell.md#install-the-aadcloudsynctools-powershell-module). 2. Use the `Export-AADCloudSyncToolsLogs` PowerShell cmdlet to capture the information. Use the following switches to fine-tune your data collection. Use: - **SkipVerboseTrace** to only export current logs without capturing verbose logs (default = false). |
active-directory | On Premises Powershell Connector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-powershell-connector.md | The connector provides a bridge between the capabilities of the ECMA Connector H If you have already downloaded the provisioning agent and configured it for another on-premises application, then continue reading in the next section. 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Hybrid Identity Administrator](../roles/permissions-reference.md#hybrid-identity-administrator).-1. Browse to **Identity** > **Hybrid management** > **Azure AD Connect** > **Cloud Sync** > **Agents**. +1. Browse to **Identity** > **Hybrid management** > **Microsoft Entra Connect** > **Cloud Sync** > **Agents**. :::image type="content" source="../../../includes/media/active-directory-cloud-sync-how-to-install/new-ux-1.png" alt-text="Screenshot of new UX screen." lightbox="../../../includes/media/active-directory-cloud-sync-how-to-install/new-ux-1.png"::: 1. Select **Download on-premises agent**, review the terms of service, then select **Accept terms & download**. > [!NOTE]- > Please use different provisioning agents for on-premises application provisioning and Azure AD Connect Cloud Sync / HR-driven provisioning. All three scenarios should not be managed on the same agent. + > Please use different provisioning agents for on-premises application provisioning and Microsoft Entra Connect Cloud Sync / HR-driven provisioning. All three scenarios should not be managed on the same agent. 1. Open the provisioning agent installer, agree to the terms of service, and select **next**. 1. When the provisioning agent wizard opens, continue to the **Select Extension** tab and select **On-premises application provisioning** when prompted for the extension you want to enable.-1. The provisioning agent uses the operating system's web browser to display a popup window for you to authenticate to Azure AD, and potentially also your organization's identity provider. If you are using Internet Explorer as the browser on Windows Server, then you may need to add Microsoft web sites to your browser's trusted site list to allow JavaScript to run correctly. -1. Provide credentials for an Azure AD administrator when you're prompted to authorize. The user is required to have the Hybrid Identity Administrator or Global Administrator role. +1. The provisioning agent uses the operating system's web browser to display a popup window for you to authenticate to Microsoft Entra ID, and potentially also your organization's identity provider. If you are using Internet Explorer as the browser on Windows Server, then you may need to add Microsoft web sites to your browser's trusted site list to allow JavaScript to run correctly. +1. Provide credentials for a Microsoft Entra administrator when you're prompted to authorize. The user is required to have the Hybrid Identity Administrator or Global Administrator role. 1. Select **Confirm** to confirm the setting. Once installation is successful, you can select **Exit**, and also close the Provisioning Agent Package installer. ## Configure the On-premises ECMA app Follow these steps to confirm that the connector host has started and has identi 1. Enter the **Secret Token** value that you defined when you created the connector. > [!NOTE]- > If you just assigned the agent to the application, please wait 10 minutes for the registration to complete. The connectivity test won't work until the registration completes. Forcing the agent registration to complete by restarting the provisioning agent on your server can speed up the registration process. Go to your server, search for **services** in the Windows search bar, identify the **Azure AD Connect Provisioning Agent** service, right-click the service, and restart. + > If you just assigned the agent to the application, please wait 10 minutes for the registration to complete. The connectivity test won't work until the registration completes. Forcing the agent registration to complete by restarting the provisioning agent on your server can speed up the registration process. Go to your server, search for **services** in the Windows search bar, identify the **Microsoft Entra Connect Provisioning Agent** service, right-click the service, and restart. 1. Select **Test Connection**, and wait one minute. 1. After the connection test is successful and indicates that the supplied credentials are authorized to enable provisioning, select **Save**. Return to the web browser window where you were configuring the application prov 1. Enter the **Secret Token** value that you defined when you created the connector. > [!NOTE]- > If you just assigned the agent to the application, please wait 10 minutes for the registration to complete. The connectivity test won't work until the registration completes. Forcing the agent registration to complete by restarting the provisioning agent on your server can speed up the registration process. Go to your server, search for **services** in the Windows search bar, identify the **Azure AD Connect Provisioning Agent Service**, right-click the service, and restart. + > If you just assigned the agent to the application, please wait 10 minutes for the registration to complete. The connectivity test won't work until the registration completes. Forcing the agent registration to complete by restarting the provisioning agent on your server can speed up the registration process. Go to your server, search for **services** in the Windows search bar, identify the **Microsoft Entra Connect Provisioning Agent Service**, right-click the service, and restart. 1. Select **Test Connection**, and wait one minute. 1. After the connection test is successful and indicates that the supplied credentials are authorized to enable provisioning, select **Save**. You'll use the Azure portal to configure the mapping between the Microsoft Entra 1. Select the **On-premises ECMA app** application. 1. Select **Provisioning**. 1. Select **Edit provisioning**, and wait 10 seconds.-1. Expand **Mappings** and select **Provision Azure Active Directory Users**. If this is the first time you've configured the attribute mappings for this application, there will be only one mapping present, for a placeholder. -To confirm that the schema is available in Azure AD, select the **Show advanced options** checkbox and select **Edit attribute list for ScimOnPremises**. Ensure that all the attributes selected in the configuration wizard are listed. If not, then wait several minutes for the schema to refresh, and then reload the page. Once you see the attributes listed, then cancel from this page to return to the mappings list. +1. Expand **Mappings** and select **Provision Microsoft Entra users**. If this is the first time you've configured the attribute mappings for this application, there will be only one mapping present, for a placeholder. +1. To confirm that the schema is available in Microsoft Entra ID, select the **Show advanced options** checkbox and select **Edit attribute list for ScimOnPremises**. Ensure that all the attributes selected in the configuration wizard are listed. If not, then wait several minutes for the schema to refresh, and then reload the page. Once you see the attributes listed, then cancel from this page to return to the mappings list. 1. Now select the **userPrincipalName** PLACEHOLDER mapping. This mapping is added by default when you first configure on-premises provisioning. Change the value to match the following: |Mapping type|Source attribute|Target attribute| |
active-directory | On Premises Scim Provisioning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-scim-provisioning.md | The Microsoft Entra provisioning service supports a [SCIM 2.0](https://techcommu If you have already downloaded the provisioning agent and configured it for another on-premises application, then continue reading in the next section. 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Hybrid Identity Administrator](../roles/permissions-reference.md#hybrid-identity-administrator).-1. Browse to **Identity** > **Hybrid management** > **Azure AD Connect** > **Cloud Sync** > **Agents**. +1. Browse to **Identity** > **Hybrid management** > **Microsoft Entra Connect** > **Cloud Sync** > **Agents**. :::image type="content" source="../../../includes/media/active-directory-cloud-sync-how-to-install/new-ux-1.png" alt-text="Screenshot of new UX screen." lightbox="../../../includes/media/active-directory-cloud-sync-how-to-install/new-ux-1.png"::: Once the agent is installed, no further configuration is necessary on-premises, 1. From the left hand menu navigate to the **Provisioning** option and select **Get started**. 1. Select **Automatic** from the dropdown list and expand the **On-Premises Connectivity** option. 1. Select the agent that you installed from the dropdown list and select **Assign Agent(s)**.-1. Now either wait 10 minutes or restart the **Microsoft Azure AD Connect Provisioning Agent** before proceeding to the next step & testing the connection. +1. Now either wait 10 minutes or restart the **Microsoft Entra Connect Provisioning Agent** before proceeding to the next step & testing the connection. 1. In the **Tenant URL** field, provide the SCIM endpoint URL for your application. The URL is typically unique to each target application and must be resolvable by DNS. 
An example for a scenario where the agent is installed on the same host as the application is https://localhost:8585/scim ![Screenshot that shows assigning an agent.](./media/on-premises-scim-provisioning/scim-2.png) |
active-directory | Plan Cloud Hr Provision | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/plan-cloud-hr-provision.md | To review these events and all other activities performed by the provisioning se #### Azure Monitor logs -All activities performed by the provisioning service are recorded in the Microsoft Entra audit logs. You can route Microsoft Entra audit logs to Azure Monitor logs for further analysis. With Azure Monitor logs (also known as Log Analytics workspace), you can query data to find events, analyze trends, and perform correlation across various data sources. Watch this [video](https://youtu.be/MP5IaCTwkQg) to learn the benefits of using Azure Monitor logs for Microsoft Entra ID logs in practical user scenarios. +All activities performed by the provisioning service are recorded in the Microsoft Entra audit logs. You can route Microsoft Entra audit logs to Azure Monitor logs for further analysis. With Azure Monitor logs (also known as Log Analytics workspace), you can query data to find events, analyze trends, and perform correlation across various data sources. Watch this [video](https://youtu.be/MP5IaCTwkQg) to learn the benefits of using Azure Monitor logs for Microsoft Entra logs in practical user scenarios. Install the [log analytics views for Microsoft Entra activity logs](../../azure-monitor/visualize/workbooks-view-designer-conversion-overview.md) to get access to [prebuilt reports](https://github.com/AzureAD/Deployment-Plans/tree/master/Log%20Analytics%20Views) around provisioning events in your environment. |
active-directory | Provision On Demand | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/provision-on-demand.md | Use on-demand provisioning to provision a user or group in seconds. Among other 6. Search for a user by first name, last name, display name, user principal name, or email address. Alternatively, you can search for a group and pick up to five users. > [!NOTE]- > For Cloud HR provisioning app (Workday/SuccessFactors to AD/Azure AD), the input value is different. + > For Cloud HR provisioning app (Workday / SuccessFactors to Active Directory / Microsoft Entra ID), the input value is different. > For Workday scenario, please provide "WorkerID" or "WID" of the user in Workday. > For SuccessFactors scenario, please provide "personIdExternal" of the user in SuccessFactors. There are currently a few known limitations to on-demand provisioning. Post your ::: zone pivot="cross-tenant-synchronization" * On-demand provisioning of groups is not supported for cross-tenant synchronization. ::: zone-end-* On-demand provisioning supports provisioning one user at a time through the Microsoft Entra portal. +* On-demand provisioning supports provisioning one user at a time through the Microsoft Entra admin center. * Restoring a previously soft-deleted user in the target tenant with on-demand provisioning isn't supported. If you try to soft-delete a user with on-demand provisioning and then restore the user, it can result in duplicate users. * On-demand provisioning of roles isn't supported. * On-demand provisioning supports disabling users that have been unassigned from the application. However, it doesn't support disabling or deleting users that have been disabled or deleted from Microsoft Entra ID. Those users don't appear when you search for a user. |
active-directory | Sap Successfactors Attribute Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/sap-successfactors-attribute-reference.md | In this article, you'll find information on: The table below captures the list of SuccessFactors attributes included by default in the following two provisioning apps: - [SuccessFactors to Active Directory User Provisioning](../saas-apps/sap-successfactors-inbound-provisioning-tutorial.md)-- [SuccessFactors to Microsoft Entra User Provisioning](../saas-apps/sap-successfactors-inbound-provisioning-cloud-only-tutorial.md)+- [SuccessFactors to Microsoft Entra user provisioning](../saas-apps/sap-successfactors-inbound-provisioning-cloud-only-tutorial.md) Please refer to the [SAP SuccessFactors integration reference](./sap-successfactors-integration-reference.md#retrieving-more-attributes) to extend the schema for additional attributes. Please refer to the [SAP SuccessFactors integration reference](./sap-successfact ## Default attribute mapping -The table below provides the default attribute mapping between SuccessFactors attributes listed above and AD/Azure AD attributes. In the Microsoft Entra provisioning app "Mapping" blade, you can modify this default mapping to include attributes from the list above. +The table below provides the default attribute mapping between SuccessFactors attributes listed above and Active Directory / Microsoft Entra attributes. In the Microsoft Entra provisioning app "Mapping" blade, you can modify this default mapping to include attributes from the list above. 
-| \# | SuccessFactors Entity | SuccessFactors Attribute | Default AD/Azure AD attribute mapping | Processing Remark | +| \# | SuccessFactors Entity | SuccessFactors Attribute | Default attribute mapping | Processing Remark | |-|-|--|--|-| | 1 | PerPerson | personIdExternal | employeeId | Used as matching attribute | | 2 | PerPerson | perPersonUuid | \[Not mapped \- used as source anchor\] | During initial sync, the Provisioning Service links the personUuid to existing objectGuid\. | |
active-directory | Sap Successfactors Integration Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/sap-successfactors-integration-reference.md | Use the steps to update your mapping to retrieve these codes. | Provisioning Job | Account status attribute | Mapping expression | | - | | | | SuccessFactors to Active Directory User Provisioning | `accountDisabled` | `Switch([emplStatus], "True", "A", "False", "U", "False", "P", "False")` |- | SuccessFactors to Microsoft Entra User Provisioning | `accountEnabled` | `Switch([emplStatus], "False", "A", "True", "U", "True", "P", "True")` | + | SuccessFactors to Microsoft Entra user provisioning | `accountEnabled` | `Switch([emplStatus], "False", "A", "True", "U", "True", "P", "True")` | 1. Save the changes. 1. Test the configuration using [provision on demand](provision-on-demand.md). This section describes how you can update the JSONPath settings to definitely re | Provisioning Job | Account status attribute | Expression to use if account status is based on "activeEmploymentsCount" | Expression to use if account status is based on "emplStatus" value | | -- | | -- | - | | SuccessFactors to Active Directory User Provisioning | `accountDisabled` | `Switch([activeEmploymentsCount], "False", "0", "True")` | `Switch([emplStatus], "True", "A", "False", "U", "False", "P", "False")` |- | SuccessFactors to Microsoft Entra User Provisioning | `accountEnabled` | `Switch([activeEmploymentsCount], "True", "0", "False")` | `Switch([emplStatus], "False", "A", "True", "U", "True", "P", "True")` | + | SuccessFactors to Microsoft Entra user provisioning | `accountEnabled` | `Switch([activeEmploymentsCount], "True", "0", "False")` | `Switch([emplStatus], "False", "A", "True", "U", "True", "P", "True")` | 1. Save your changes. 1. Test the configuration using [provision on demand](provision-on-demand.md). |
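The mapping expressions in the tables above follow the provisioning expression pattern `Switch(source, defaultValue, key1, value1, key2, value2, …)`: return the value paired with the first matching key, otherwise the default. As an illustrative sketch only (a Python emulation of that evaluation logic, not the provisioning service itself):

```python
def switch(value, default, *pairs):
    """Minimal emulation of the provisioning expression
    Switch(source, defaultValue, key1, value1, key2, value2, ...):
    return the value paired with the matching key, else the default."""
    mapping = dict(zip(pairs[0::2], pairs[1::2]))
    return mapping.get(value, default)

# accountDisabled mapping from the table above:
# Switch([emplStatus], "True", "A", "False", "U", "False", "P", "False")
def account_disabled(empl_status):
    return switch(empl_status, "True", "A", "False", "U", "False", "P", "False")

print(account_disabled("A"))  # active worker -> "False" (account stays enabled)
print(account_disabled("T"))  # no matching key -> default "True" (disabled)
```

Note how the two rows invert the default and per-key values: `accountDisabled` defaults to `"True"`, while `accountEnabled` defaults to `"False"`, so an unrecognized `emplStatus` always results in a disabled account.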
active-directory | Use Scim To Provision Users And Groups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/use-scim-to-provision-users-and-groups.md | There are several endpoints defined in the SCIM RFC. You can start with the `/Us The Microsoft Entra provisioning service is designed to support a SCIM 2.0 user management API. > [!IMPORTANT]-> The behavior of the Microsoft Entra SCIM implementation was last updated on December 18, 2018. For information on what changed, see [SCIM 2.0 protocol compliance of the Microsoft Entra User Provisioning service](application-provisioning-config-problem-scim-compatibility.md). +> The behavior of the Microsoft Entra SCIM implementation was last updated on December 18, 2018. For information on what changed, see [SCIM 2.0 protocol compliance of the Microsoft Entra user provisioning service](application-provisioning-config-problem-scim-compatibility.md). Within the SCIM 2.0 protocol specification, your application must support these requirements: Use the general guidelines when implementing a SCIM endpoint to ensure compatibi * `id` is a required property for all resources. Every response that returns a resource should ensure each resource has this property, except for `ListResponse` with zero elements. * Values sent should be stored in the same format they were sent. Invalid values should be rejected with a descriptive, actionable error message. Transformations of data shouldn't happen between data from Microsoft Entra ID and data stored in the SCIM application. (for example. A phone number sent as 55555555555 shouldn't be saved/returned as +5 (555) 555-5555) * It isn't necessary to include the entire resource in the **PATCH** response.-* Don't require a case-sensitive match on structural elements in SCIM, in particular **PATCH** `op` operation values, as defined in [section 3.5.2](https://tools.ietf.org/html/rfc7644#section-3.5.2). 
Azure AD emits the values of `op` as **Add**, **Replace**, and **Remove**. -* Microsoft Azure AD makes requests to fetch a random user and group to ensure that the endpoint and the credentials are valid. It's also done as a part of the **Test Connection** flow. +* Don't require a case-sensitive match on structural elements in SCIM, in particular **PATCH** `op` operation values, as defined in [section 3.5.2](https://tools.ietf.org/html/rfc7644#section-3.5.2). Microsoft Entra ID emits the values of `op` as **Add**, **Replace**, and **Remove**. +* Microsoft Entra ID makes requests to fetch a random user and group to ensure that the endpoint and the credentials are valid. It's also done as a part of the **Test Connection** flow. * Support HTTPS on your SCIM endpoint. * Custom complex and multivalued attributes are supported but Microsoft Entra ID doesn't have many complex data structures to pull data from in these cases. Name/value attributes can be mapped to easily, but flowing data to complex attributes with three or more subattributes isn't supported. * The "type" subattribute values of multivalued complex attributes must be unique. For example, there can't be two different email addresses with the "work" subtype. Use the general guidelines when implementing a SCIM endpoint to ensure compatibi ### Retrieving Resources: * Response to a query/filter request should always be a `ListResponse`.-* Microsoft Azure AD only uses the following operators: `eq`, `and` +* Microsoft Entra ID only uses the following operators: `eq`, `and` * The attribute that the resources can be queried on should be set as a matching attribute on the application, see [Customizing User Provisioning Attribute Mappings](customize-application-attributes.md). ### /Users: |
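Two of the SCIM guidelines above lend themselves to a short sketch: matching **PATCH** `op` values case-insensitively (RFC 7644 section 3.5.2, since Microsoft Entra ID emits them capitalized), and handling the simple `eq` filter form used for matching. This is a minimal illustration under those assumptions, not a complete SCIM endpoint; the function names are hypothetical:

```python
import re

VALID_OPS = {"add", "replace", "remove"}

def normalize_patch_op(op: str) -> str:
    """RFC 7644 s3.5.2: PATCH 'op' values must be matched case-insensitively.
    Microsoft Entra ID may send them as 'Add', 'Replace', 'Remove'."""
    normalized = op.lower()
    if normalized not in VALID_OPS:
        raise ValueError(f"unsupported PATCH op: {op!r}")
    return normalized

# Simple 'attribute eq "value"' filter, the operator form used for matching.
FILTER_EQ = re.compile(r'^(\w+) eq "([^"]*)"$')

def parse_eq_filter(expr: str):
    """Parse a single-clause eq filter into (attribute, value)."""
    match = FILTER_EQ.match(expr)
    if match is None:
        raise ValueError(f"unsupported filter expression: {expr!r}")
    return match.group(1), match.group(2)

print(normalize_patch_op("Replace"))                      # -> "replace"
print(parse_eq_filter('userName eq "alice@contoso.com"'))  # -> ('userName', 'alice@contoso.com')
```

A production endpoint would also need to handle `and`-joined clauses and return results wrapped in a `ListResponse`, per the guidelines above.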
active-directory | User Provisioning Sync Attributes For Mapping | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/user-provisioning-sync-attributes-for-mapping.md | Get-AzureADUser -ObjectId 0ccf8df6-62f1-4175-9e55-73da9e742690 | Select -ExpandP ``` ## Create an extension attribute using cloud sync-Cloud sync will automatically discover your extensions in on-premises Active Directory when you go to add a new mapping. Use the steps below to autodiscover these attributes and set up a corresponding mapping to Azure AD. +Cloud sync will automatically discover your extensions in on-premises Active Directory when you go to add a new mapping. Use the steps below to autodiscover these attributes and set up a corresponding mapping to Microsoft Entra ID. 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Hybrid Identity Administrator](../roles/permissions-reference.md#hybrid-identity-administrator).-1. Browse to **Identity** > **Hybrid management** > **Azure AD Connect** > **Cloud Sync**. +1. Browse to **Identity** > **Hybrid management** > **Microsoft Entra Connect** > **Cloud Sync**. 1. Select the configuration you wish to add the extension attribute and mapping. 1. Under **Manage attributes** select **click to edit mappings**. 1. Select **Add attribute mapping**. The attributes will automatically be discovered. |
active-directory | Workday Integration Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/workday-integration-reference.md | To retrieve these data sets: 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Application Administrator](../roles/permissions-reference.md#application-administrator). 1. Browse to **Identity** > **Applications** > **Enterprise applications**.-1. Select your Workday to AD/Azure AD user provisioning application. +1. Select your Workday to Active Directory / Microsoft Entra user provisioning application. 1. Select **Provisioning**. 1. Edit the mappings and open the Workday attribute list from the advanced section. -1. Add the following attributes definitions and mark them as "Required". These attributes aren't mapped to any attribute in AD or Azure AD. They serve as signals to the connector to retrieve the Cost Center, Cost Center Hierarchy and Pay Group information. +1. Add the following attributes definitions and mark them as "Required". These attributes aren't mapped to any attribute in Active Directory or Microsoft Entra ID. They serve as signals to the connector to retrieve the Cost Center, Cost Center Hierarchy and Pay Group information. > [!div class="mx-tdCol2BreakAll"] >| Attribute Name | XPATH API expression | |
active-directory | Workday Retrieve Pronoun Information | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/workday-retrieve-pronoun-information.md | Once you confirm that pronoun data is available in the *Get_Workers* response, g <a name='updating-azure-ad-provisioning-app-to-retrieve-pronouns'></a> -To retrieve pronouns from Workday, update your Azure AD provisioning app to query Workday using v38.1 of the Workday Web Services. We recommend testing this configuration first in your test/sandbox environment before implementing the change in production. +To retrieve pronouns from Workday, update your Microsoft Entra provisioning app to query Workday using v38.1 of the Workday Web Services. We recommend testing this configuration first in your test/sandbox environment before implementing the change in production. 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Application Administrator](../roles/permissions-reference.md#application-administrator). 1. Browse to **Identity** > **Applications** > **Enterprise applications**.-1. Select your Workday to AD/Azure AD user provisioning application and go to **Provisioning** . +1. Select your Workday to Active Directory / Microsoft Entra user provisioning application and go to **Provisioning** . 1. In the **Admin Credentials** section, update the **Tenant URL** to include the Workday Web Service version v38.1 as shown. >[!div class="mx-imgBorder"] |
active-directory | App Proxy Protect Ndes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/app-proxy-protect-ndes.md | Microsoft Entra application proxy is built on Azure. It gives you a massive amou ![The new Microsoft Entra application proxy connector shown as active in the Microsoft Entra admin center](./media/app-proxy-protect-ndes/connected-app-proxy.png) > [!NOTE]- > To provide high availability for applications authenticating through the Microsoft Entra application proxy, you can install connectors on multiple VMs. Repeat the same steps listed in the previous section to install the connector on other servers joined to the Microsoft Entra DS managed domain. + > To provide high availability for applications authenticating through the Microsoft Entra application proxy, you can install connectors on multiple VMs. Repeat the same steps listed in the previous section to install the connector on other servers joined to the Microsoft Entra Domain Services managed domain. 1. After successful installation, go back to the Microsoft Entra admin center. |
active-directory | Powershell Get All App Proxy Apps With Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-app-proxy-apps-with-policy.md | This PowerShell script example lists all the Microsoft Entra application proxy a [!INCLUDE [cloud-shell-try-it.md](../../../../includes/cloud-shell-try-it.md)] -This sample requires the [Microsoft Entra V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview&preserve-view=true) (AzureADPreview). +This sample requires the [Azure Active Directory PowerShell 2.0 for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview&preserve-view=true) (AzureADPreview). ## Sample script |
active-directory | 4 Secure Access Groups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/4-secure-access-groups.md | Determine who is granted permissions to create groups: Administrators, employees * Internal and external users can join groups in your tenant * Users can create Microsoft 365 Groups * [Manage who can create Microsoft 365 Groups](/microsoft-365/solutions/manage-creation-of-groups?view=o365-worldwide&preserve-view=true) - * Use Windows PowerShell to configure this setting + * Use PowerShell to configure this setting * [Restrict your Microsoft Entra app to a set of users in a Microsoft Entra tenant](../develop/howto-restrict-your-app-to-a-set-of-users.md) * [Set up self-service group management in Microsoft Entra ID](../enterprise-users/groups-self-service-management.md) * [Troubleshoot and resolve groups issues](../enterprise-users/groups-troubleshooting.md) |
active-directory | 9 Secure Access Teams Sharepoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/9-secure-access-teams-sharepoint.md | Sharing in Microsoft 365 is partially governed by the **External Identities, Ext Learn more: * [Microsoft Entra admin center](https://entra.microsoft.com)-* [External Identities in Azure AD](../external-identities/external-identities-overview.md) +* [External Identities in Microsoft Entra ID](../external-identities/external-identities-overview.md) ### Guest user access |
active-directory | Monitor Sign In Health For Resilience | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/monitor-sign-in-health-for-resilience.md | During an impacting event, two things may happen: - A Microsoft Entra tenant. - A user with global administrator or security administrator role for the Microsoft Entra tenant. - A Log Analytics workspace in your Azure subscription to send logs to Azure Monitor logs. Learn how to [create a Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md).-- Microsoft Entra ID logs integrated with Azure Monitor logs. Learn how to [Integrate Microsoft Entra Sign- in Logs with Azure Monitor Stream.](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md)+- Microsoft Entra ID logs integrated with Azure Monitor logs. Learn how to [Integrate Microsoft Entra sign-in logs with Azure Monitor Stream.](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md) ## Configure the App sign-in health workbook |
active-directory | Ops Guide Auth | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/ops-guide-auth.md | If available, use a security information and event management (SIEM) solution t <a name='azure-ad-logs-archived-and-integrated-with-incident-response-plans'></a> -### Microsoft Entra ID logs archived and integrated with incident response plans +### Microsoft Entra logs archived and integrated with incident response plans -Having access to sign-in activity, audits and risk events for Microsoft Entra ID is crucial for troubleshooting, usage analytics, and forensics investigations. Microsoft Entra ID provides access to these sources through REST APIs that have a limited retention period. A security information and event management (SIEM) system, or equivalent archival technology, is key for long-term storage of audits and supportability. To enable long-term storage of Microsoft Entra ID Logs, you must either add them to your existing SIEM solution or use [Azure Monitor](../reports-monitoring/concept-activity-logs-azure-monitor.md). Archive logs that can be used as part of your incident response plans and investigations. +Having access to sign-in activity, audits and risk events for Microsoft Entra ID is crucial for troubleshooting, usage analytics, and forensics investigations. Microsoft Entra ID provides access to these sources through REST APIs that have a limited retention period. A security information and event management (SIEM) system, or equivalent archival technology, is key for long-term storage of audits and supportability. To enable long-term storage of Microsoft Entra logs, you must either add them to your existing SIEM solution or use [Azure Monitor](../reports-monitoring/concept-activity-logs-azure-monitor.md). Archive logs that can be used as part of your incident response plans and investigations. 
#### Logs recommended reading - [Microsoft Entra ID audit API reference](/graph/api/resources/directoryaudit) - [Microsoft Entra sign-in activity report API reference](/graph/api/resources/signin)-- [Get data using the Microsoft Entra ID Reporting API with certificates](../reports-monitoring/howto-configure-prerequisites-for-reporting-api.md)+- [Get data using the Microsoft Entra reporting API with certificates](../reports-monitoring/howto-configure-prerequisites-for-reporting-api.md) - [Microsoft Graph for Microsoft Entra ID Protection](../identity-protection/howto-identity-protection-graph-api.md) - [Office 365 Management Activity API reference](/office/office-365-management-api/office-365-management-activity-api-reference) - [How to use the Microsoft Entra ID Power BI Content Pack](../reports-monitoring/howto-use-azure-monitor-workbooks.md) There are 12 aspects to a secure Identity infrastructure. This list will help yo - Lock down legacy authentication protocols. - Detect and remediate illicit consent grants. - Lock down user and group settings.-- Enable long-term storage of Microsoft Entra ID logs for troubleshooting, usage analytics, and forensics investigations.+- Enable long-term storage of Microsoft Entra logs for troubleshooting, usage analytics, and forensics investigations. ## Next steps |
active-directory | Protect M365 From On Premises Attacks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/protect-m365-from-on-premises-attacks.md | Deploy Microsoft Entra joined Windows 10 workstations with mobile device managem - **Application and workload servers** - Applications or resources that required servers can be migrated to Azure infrastructure as a service (IaaS). Use Microsoft Entra Domain Services (Microsoft Entra DS) to decouple trust and dependency on on-premises instances of Active Directory. To achieve this decoupling, make sure virtual networks used for Microsoft Entra DS don't have a connection to corporate networks. See [Microsoft Entra Domain Services](../../active-directory-domain-services/overview.md). + Applications or resources that required servers can be migrated to Azure infrastructure as a service (IaaS). Use Microsoft Entra Domain Services to decouple trust and dependency on on-premises instances of Active Directory. To achieve this decoupling, make sure virtual networks used for Microsoft Entra Domain Services don't have a connection to corporate networks. See [Microsoft Entra Domain Services](../../active-directory-domain-services/overview.md). Use credential tiering. Application servers are typically considered tier-1 assets. For more information, see [Enterprise access model](/security/compass/privileged-access-access-model#ADATM_BM). Use Microsoft Entra Conditional Access to interpret signals and use them to make ## Monitor -After you configure your environment to protect your Microsoft 365 from an on-premises compromise, proactively monitor the environment. For more information, see [What is Microsoft Entra ID monitoring](../reports-monitoring/overview-monitoring.md). +After you configure your environment to protect your Microsoft 365 from an on-premises compromise, proactively monitor the environment. 
For more information, see [What is Microsoft Entra monitoring?](../reports-monitoring/overview-monitoring-health.md) ### Scenarios to monitor Monitor the following key scenarios, in addition to any scenarios specific to yo Define a log storage and retention strategy, design, and implementation to facilitate a consistent tool set. For example, you could consider security information and event management (SIEM) systems like Microsoft Sentinel, common queries, and investigation and forensics playbooks. -- **Microsoft Entra ID logs**. Ingest generated logs and signals by consistently following best practices for settings such as diagnostics, log retention, and SIEM ingestion.+- **Microsoft Entra logs**. Ingest generated logs and signals by consistently following best practices for settings such as diagnostics, log retention, and SIEM ingestion. - The log strategy must include the following Microsoft Entra ID logs: + The log strategy must include the following Microsoft Entra logs: - Sign-in activity - Audit logs Define a log storage and retention strategy, design, and implementation to facil Use the Microsoft Graph API to ingest risk events. See [Use the Microsoft Graph identity protection APIs](/graph/api/resources/identityprotection-root). - You can stream Microsoft Entra ID logs to Azure Monitor logs. See [Integrate Microsoft Entra ID logs with Azure Monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md). + You can stream Microsoft Entra logs to Azure Monitor logs. See [Integrate Microsoft Entra logs with Azure Monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md). - **Hybrid infrastructure operating system security logs**. All hybrid identity infrastructure operating system logs should be archived and carefully monitored as a tier-0 system, because of the surface-area implications. Include the following elements: |
active-directory | Resilience Client App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/resilience-client-app.md | We recommend developers build a process to use the latest MSAL release because a Find the latest version and release notes: -* [microsoft-authentication-library-for--js](https://github.com/AzureAD/microsoft-authentication-library-for-js/releases) -* [microsoft-authentication-library-for--dotnet](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/releases) -* [microsoft-authentication-library-for--python](https://github.com/AzureAD/microsoft-authentication-library-for-python/releases) -* [microsoft-authentication-library-for--java](https://github.com/AzureAD/microsoft-authentication-library-for-java/releases) -* [microsoft-authentication-library-for--objc](https://github.com/AzureAD/microsoft-authentication-library-for-objc/releases) -* [microsoft-authentication-library-for--android](https://github.com/AzureAD/microsoft-authentication-library-for-android/releases) -* [microsoft-authentication-library-for--js](https://github.com/AzureAD/microsoft-authentication-library-for-js/releases) -* [microsoft-identity-web](https://github.com/AzureAD/microsoft-identity-web/releases) +* [`microsoft-authentication-library-for-js`](https://github.com/AzureAD/microsoft-authentication-library-for-js/releases) +* [`microsoft-authentication-library-for-dotnet`](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/releases) +* [`microsoft-authentication-library-for-python`](https://github.com/AzureAD/microsoft-authentication-library-for-python/releases) +* [`microsoft-authentication-library-for-java`](https://github.com/AzureAD/microsoft-authentication-library-for-java/releases) +* [`microsoft-authentication-library-for-objc`](https://github.com/AzureAD/microsoft-authentication-library-for-objc/releases) +* 
[`microsoft-authentication-library-for-android`](https://github.com/AzureAD/microsoft-authentication-library-for-android/releases) +* [`microsoft-authentication-library-for-js`](https://github.com/AzureAD/microsoft-authentication-library-for-js/releases) +* [`microsoft-identity-web`](https://github.com/AzureAD/microsoft-identity-web/releases) ## Resilient patterns for token handling Learn more: * [Conditional Access policy evaluation](../conditional-access/concept-continuous-access-evaluation.md#conditional-access-policy-evaluation) * [How to use CAE enabled APIs in your applications](../develop/app-resilience-continuous-access-evaluation.md) -If you develop resource APIs, go to openid.net for [Shared Signals – A Secure Webhooks Framework](https://openid.net/wg/sse/). +If you develop resource APIs, go to `openid.net` for [Shared Signals – A Secure Webhooks Framework](https://openid.net/wg/sse/). ## Next steps |
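One common resilient pattern for token handling, as referenced in the section above, is to cache an acquired token and reuse it while it remains valid, even if a refresh attempt fails transiently. The sketch below is illustrative only; it is not MSAL's API (MSAL libraries provide their own token caches), and all names are hypothetical:

```python
import time

class TokenCache:
    """Illustrative sketch of resilient token handling (not MSAL's API):
    cache an access token, refresh it ahead of expiry, and fall back to a
    still-valid cached token if acquisition fails transiently."""

    def __init__(self, acquire, refresh_margin=300):
        self._acquire = acquire        # callable returning (token, expires_at)
        self._margin = refresh_margin  # seconds before expiry to refresh
        self._token = None
        self._expires_at = 0.0

    def get_token(self, now=None):
        now = time.time() if now is None else now
        if self._token and now < self._expires_at - self._margin:
            return self._token  # cached token still comfortably valid
        try:
            self._token, self._expires_at = self._acquire()
        except Exception:
            # Resilience: during an identity-provider outage, keep using a
            # token that has not yet actually expired.
            if self._token and now < self._expires_at:
                return self._token
            raise
        return self._token

calls = []
def fake_acquire():
    calls.append(1)
    return ("token-1", time.time() + 3600)

cache = TokenCache(fake_acquire)
print(cache.get_token())   # acquires once
print(cache.get_token())   # served from cache; fake_acquire not called again
print(len(calls))          # -> 1
```

The refresh margin means the app asks for a new token while the old one is still usable, so a brief outage of the token endpoint need not interrupt calls to resource APIs.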
active-directory | Secure Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/secure-best-practices.md | Detailed information on using automated or manual processes and tools to monitor Some environments might have regulatory requirements that limit which data (if any) can leave a given environment. If centralized monitoring across environments isn't possible, teams should have operational procedures to correlate activities of identities across environments for auditing and forensics purposes such as cross-environment lateral movement attempts. It's recommended that the unique object identifiers of human identities belonging to the same person are discoverable, potentially as part of the identity provisioning systems. -The log strategy must include the following Microsoft Entra ID logs for each tenant used in the organization: +The log strategy must include the following Microsoft Entra logs for each tenant used in the organization: * Sign-in activity |
active-directory | Secure Resource Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/secure-resource-management.md | Internally, managed identities are service principals of a special type, to only ## Microsoft Entra Domain Services -Microsoft Entra Domain Services (Microsoft Entra DS) provides a managed domain to facilitate authentication for Azure workloads using legacy protocols. Supported servers are moved from an on-premises AD DS forest and joined to a Microsoft Entra DS managed domain and continue to use legacy protocols for authentication (for example, Kerberos authentication). +Microsoft Entra Domain Services provides a managed domain to facilitate authentication for Azure workloads using legacy protocols. Supported servers are moved from an on-premises AD DS forest and joined to a Microsoft Entra Domain Services managed domain and continue to use legacy protocols for authentication (for example, Kerberos authentication). ## Azure AD B2C directories and Azure There are three key options regarding isolation management of IaaS workloads: * Virtual machines joined to stand-alone Active Directory Domain Services (AD DS) -* Microsoft Entra Domain Services (Microsoft Entra DS) joined virtual machines +* Microsoft Entra Domain Services joined virtual machines * Sign-in to virtual machines in Azure using Microsoft Entra authentication A key concept to address with the first two options is that there are two identity realms that are involved in these scenarios. -* When you sign in to an Azure Windows Server VM via remote desktop protocol (RDP), you're generally logging on to the server using your domain credentials, which performs a Kerberos authentication against an on-premises AD DS domain controller or Microsoft Entra DS. Alternatively, if the server isn't domain-joined then a local account can be used to sign in to the virtual machines. 
+* When you sign in to an Azure Windows Server VM via remote desktop protocol (RDP), you're generally logging on to the server using your domain credentials, which performs a Kerberos authentication against an on-premises AD DS domain controller or Microsoft Entra Domain Services. Alternatively, if the server isn't domain-joined then a local account can be used to sign in to the virtual machines. * When you sign in to the Azure portal to create or manage a VM, you're authenticating against Microsoft Entra ID (potentially using the same credentials if you've synchronized the correct accounts), and this could result in an authentication against your domain controllers should you be using Active Directory Federation Services (AD FS) or pass-through authentication. AD DS domain controllers: a minimum of two AD DS domain controllers must be depl ### Microsoft Entra Domain Services joined virtual machines -When a requirement exists to deploy IaaS workloads to Azure that require identity isolation from AD DS administrators and users in another forest, then a Microsoft Entra Domain Services (Microsoft Entra DS) managed domain can be deployed. Microsoft Entra DS is a service that provides a managed domain to facilitate authentication for Azure workloads using legacy protocols. This provides an isolated domain without the technical complexities of building and managing your own AD DS. The following considerations need to be made. +When a requirement exists to deploy IaaS workloads to Azure that require identity isolation from AD DS administrators and users in another forest, then a Microsoft Entra Domain Services managed domain can be deployed. Microsoft Entra Domain Services is a service that provides a managed domain to facilitate authentication for Azure workloads using legacy protocols. This provides an isolated domain without the technical complexities of building and managing your own AD DS. The following considerations need to be made. 
-![Diagram that shows Microsoft Entra DS virtual machine management.](media/secure-resource-management/vm-to-domain-services.png) +![Diagram that shows Microsoft Entra Domain Services virtual machine management.](media/secure-resource-management/vm-to-domain-services.png) -**Microsoft Entra DS managed domain** - Only one Microsoft Entra DS managed domain can be deployed per Microsoft Entra tenant and this is bound to a single VNet. It's recommended that this VNet forms the "hub" for Microsoft Entra DS authentication. From this hub, "spokes" can be created and linked to allow legacy authentication for servers and applications. The spokes are additional VNets on which Microsoft Entra DS joined servers are located and are linked to the hub using Azure network gateways or VNet peering. +**Microsoft Entra Domain Services managed domain** - Only one Microsoft Entra Domain Services managed domain can be deployed per Microsoft Entra tenant and this is bound to a single VNet. It's recommended that this VNet forms the "hub" for Microsoft Entra Domain Services authentication. From this hub, "spokes" can be created and linked to allow legacy authentication for servers and applications. The spokes are additional VNets on which Microsoft Entra Domain Services joined servers are located and are linked to the hub using Azure network gateways or VNet peering. -**Managed domain location** - A location must be set when deploying a Microsoft Entra DS managed domain. The location is a physical region (data center) where the managed domain is deployed. It's recommended you: +**Managed domain location** - A location must be set when deploying a Microsoft Entra Domain Services managed domain. The location is a physical region (data center) where the managed domain is deployed. It's recommended you: -* Consider a location that is geographically closed to the servers and applications that require Microsoft Entra DS services. 
+* Consider a location that is geographically close to the servers and applications that require Microsoft Entra Domain Services. * Consider regions that provide Availability Zones capabilities for high availability requirements. For more information, see [Regions and Availability Zones in Azure](../../reliability/availability-zones-service-support.md). -**Object provisioning** - Microsoft Entra DS synchronizes identities from the Microsoft Entra ID that is associated with the subscription that Microsoft Entra DS is deployed into. It's also worth noting that if the associated Microsoft Entra ID has synchronization set up with Microsoft Entra Connect (user forest scenario) then the life cycle of these identities can also be reflected in Microsoft Entra DS. This service has two modes that can be used for provisioning user and group objects from Microsoft Entra ID. +**Object provisioning** - Microsoft Entra Domain Services synchronizes identities from the Microsoft Entra ID that is associated with the subscription that Microsoft Entra Domain Services is deployed into. It's also worth noting that if the associated Microsoft Entra ID has synchronization set up with Microsoft Entra Connect (user forest scenario) then the life cycle of these identities can also be reflected in Microsoft Entra Domain Services. This service has two modes that can be used for provisioning user and group objects from Microsoft Entra ID. -* **All**: All users and groups are synchronized from Microsoft Entra ID into Microsoft Entra DS. +* **All**: All users and groups are synchronized from Microsoft Entra ID into Microsoft Entra Domain Services. -* **Scoped**: Only users in scope of a group(s) are synchronized from Microsoft Entra ID into Microsoft Entra DS. +* **Scoped**: Only users in scope of a group(s) are synchronized from Microsoft Entra ID into Microsoft Entra Domain Services. 
-When you first deploy Microsoft Entra DS, an automatic one-way synchronization is configured to replicate the objects from Microsoft Entra ID. This one-way synchronization continues to run in the background to keep the Microsoft Entra DS managed domain up to date with any changes from Microsoft Entra ID. No synchronization occurs from Microsoft Entra DS back to Microsoft Entra ID. For more information, see [How objects and credentials are synchronized in a Microsoft Entra Domain Services managed domain](../../active-directory-domain-services/synchronization.md). +When you first deploy Microsoft Entra Domain Services, an automatic one-way synchronization is configured to replicate the objects from Microsoft Entra ID. This one-way synchronization continues to run in the background to keep the Microsoft Entra Domain Services managed domain up to date with any changes from Microsoft Entra ID. No synchronization occurs from Microsoft Entra Domain Services back to Microsoft Entra ID. For more information, see [How objects and credentials are synchronized in a Microsoft Entra Domain Services managed domain](../../active-directory-domain-services/synchronization.md). -It's worth noting that if you need to change the type of synchronization from All to Scoped (or vice versa), then the Microsoft Entra DS managed domain will need to be deleted, recreated and configured. In addition, organizations should consider the use of "scoped" provisioning to reduce the identities to only those that need access to Microsoft Entra DS resources as a good practice. +It's worth noting that if you need to change the type of synchronization from All to Scoped (or vice versa), then the Microsoft Entra Domain Services managed domain will need to be deleted, recreated and configured. In addition, organizations should consider the use of "scoped" provisioning to reduce the identities to only those that need access to Microsoft Entra Domain Services resources as a good practice. 
-**Group Policy Objects (GPO)** - To configure GPO in a Microsoft Entra DS managed domain you must use Group Policy Management tools on a server that has been domain joined to the Microsoft Entra DS managed domain. For more information, see [Administer Group Policy in a Microsoft Entra Domain Services managed domain](../../active-directory-domain-services/manage-group-policy.md). +**Group Policy Objects (GPO)** - To configure GPO in a Microsoft Entra Domain Services managed domain you must use Group Policy Management tools on a server that has been domain joined to the Microsoft Entra Domain Services managed domain. For more information, see [Administer Group Policy in a Microsoft Entra Domain Services managed domain](../../active-directory-domain-services/manage-group-policy.md). -**Secure LDAP** - Microsoft Entra DS provides a secure LDAP service that can be used by applications that require it. This setting is disabled by default and to enable secure LDAP a certificate needs to be uploaded, in addition, the NSG that secures the VNet that Microsoft Entra DS is deployed on to must allow port 636 connectivity to the Microsoft Entra DS managed domains. For more information, see [Configure secure LDAP for a Microsoft Entra Domain Services managed domain](../../active-directory-domain-services/tutorial-configure-ldaps.md). +**Secure LDAP** - Microsoft Entra Domain Services provides a secure LDAP service that can be used by applications that require it. This setting is disabled by default and to enable secure LDAP a certificate needs to be uploaded, in addition, the NSG that secures the VNet that Microsoft Entra Domain Services is deployed on to must allow port 636 connectivity to the Microsoft Entra Domain Services managed domains. For more information, see [Configure secure LDAP for a Microsoft Entra Domain Services managed domain](../../active-directory-domain-services/tutorial-configure-ldaps.md). 
-**Administration** - To perform administration duties on Microsoft Entra DS (for example, domain join machines or edit GPO), the account used for this task needs to be part of the Microsoft Entra DC Administrators group. Accounts that are members of this group can't directly sign-in to domain controllers to perform management tasks. Instead, you create a management VM that is joined to the Microsoft Entra DS managed domain, then install your regular AD DS management tools. For more information, see [Management concepts for user accounts, passwords, and administration in Microsoft Entra Domain Services](../../active-directory-domain-services/administration-concepts.md). +**Administration** - To perform administration duties on Microsoft Entra Domain Services (for example, domain join machines or edit GPO), the account used for this task needs to be part of the Microsoft Entra DC Administrators group. Accounts that are members of this group can't directly sign-in to domain controllers to perform management tasks. Instead, you create a management VM that is joined to the Microsoft Entra Domain Services managed domain, then install your regular AD DS management tools. For more information, see [Management concepts for user accounts, passwords, and administration in Microsoft Entra Domain Services](../../active-directory-domain-services/administration-concepts.md). -**Password hashes** - For authentication with Microsoft Entra DS to work, password hashes for all users need to be in a format that is suitable for NT LAN Manager (NTLM) and Kerberos authentication. To ensure authentication with Microsoft Entra DS works as expected, the following prerequisites need to be performed. +**Password hashes** - For authentication with Microsoft Entra Domain Services to work, password hashes for all users need to be in a format that is suitable for NT LAN Manager (NTLM) and Kerberos authentication. 
To ensure authentication with Microsoft Entra Domain Services works as expected, the following prerequisites need to be performed. * **Users synchronized with Microsoft Entra Connect (from AD DS)** - The legacy password hashes need to be synchronized from on-premises AD DS to Microsoft Entra ID. -* **Users created in Microsoft Entra ID** - Need to reset their password for the correct hashes to be generated for usage with Microsoft Entra DS. For more information, see [Enable synchronization of password hashes](../../active-directory-domain-services/tutorial-configure-password-hash-sync.md). +* **Users created in Microsoft Entra ID** - Need to reset their password for the correct hashes to be generated for usage with Microsoft Entra Domain Services. For more information, see [Enable synchronization of password hashes](../../active-directory-domain-services/tutorial-configure-password-hash-sync.md). -**Network** - Microsoft Entra DS is deployed on to an Azure VNet so considerations need to be made to ensure that servers and applications are secured and can access the managed domain correctly. For more information, see [Virtual network design considerations and configuration options for Microsoft Entra Domain Services](../../active-directory-domain-services/network-considerations.md). +**Network** - Microsoft Entra Domain Services is deployed on to an Azure VNet so considerations need to be made to ensure that servers and applications are secured and can access the managed domain correctly. For more information, see [Virtual network design considerations and configuration options for Microsoft Entra Domain Services](../../active-directory-domain-services/network-considerations.md). -* Microsoft Entra DS must be deployed in its own subnet: Don't use an existing subnet or a gateway subnet. +* Microsoft Entra Domain Services must be deployed in its own subnet: Don't use an existing subnet or a gateway subnet. 
-* **A network security group (NSG)** - is created during the deployment of a Microsoft Entra DS managed domain. This network security group contains the required rules for correct service communication. Don't create or use an existing network security group with your own custom rules. +* **A network security group (NSG)** - is created during the deployment of a Microsoft Entra Domain Services managed domain. This network security group contains the required rules for correct service communication. Don't create or use an existing network security group with your own custom rules. -* **Microsoft Entra DS requires 3-5 IP addresses** - Make sure that your subnet IP address range can provide this number of addresses. Restricting the available IP addresses can prevent Microsoft Entra DS from maintaining two domain controllers. +* **Microsoft Entra Domain Services requires 3-5 IP addresses** - Make sure that your subnet IP address range can provide this number of addresses. Restricting the available IP addresses can prevent Microsoft Entra Domain Services from maintaining two domain controllers. -* **VNet DNS Server** - As previously discussed about the "hub and spoke" model, it's important to have DNS configured correctly on the VNets to ensure that servers joined to the Microsoft Entra DS managed domain have the correct DNS settings to resolve the Microsoft Entra DS managed domain. Each VNet has a DNS server entry that is passed to servers as they obtain an IP address and these DNS entries need to be the IP addresses of the Microsoft Entra DS managed domain. For more information, see [Update DNS settings for the Azure virtual network](../../active-directory-domain-services/tutorial-create-instance.md). 
+* **VNet DNS Server** - As previously discussed about the "hub and spoke" model, it's important to have DNS configured correctly on the VNets to ensure that servers joined to the Microsoft Entra Domain Services managed domain have the correct DNS settings to resolve the Microsoft Entra Domain Services managed domain. Each VNet has a DNS server entry that is passed to servers as they obtain an IP address and these DNS entries need to be the IP addresses of the Microsoft Entra Domain Services managed domain. For more information, see [Update DNS settings for the Azure virtual network](../../active-directory-domain-services/tutorial-create-instance.md). **Challenges** - The following list highlights key challenges with using this option for Identity Isolation. -* Some Microsoft Entra DS configuration can only be administered from a Microsoft Entra DS joined server. +* Some Microsoft Entra Domain Services configuration can only be administered from a Microsoft Entra Domain Services joined server. -* Only one Microsoft Entra DS managed domain can be deployed per Microsoft Entra tenant. As we describe in this section the hub and spoke model is recommended to provide Microsoft Entra DS authentication to services on other VNets. +* Only one Microsoft Entra Domain Services managed domain can be deployed per Microsoft Entra tenant. As we describe in this section the hub and spoke model is recommended to provide Microsoft Entra Domain Services authentication to services on other VNets. * Further infrastructure may be required for management of patching and software deployments. Organizations should consider deploying Azure Update Management, Group Policy (GPO) or System Center Configuration Manager (SCCM) to manage these servers. -For this isolated model, it's assumed that there's no connectivity to the VNet that hosts the Microsoft Entra DS managed domain from the customer's corporate network and that there are no trusts configured with other forests. 
A jumpbox or management server should be created to allow a point from which the Microsoft Entra DS can be managed and administered. +For this isolated model, it's assumed that there's no connectivity to the VNet that hosts the Microsoft Entra Domain Services managed domain from the customer's corporate network and that there are no trusts configured with other forests. A jumpbox or management server should be created to allow a point from which Microsoft Entra Domain Services can be managed and administered. <a name='sign-into-virtual-machines-in-azure-using-azure-active-directory-authentication'></a> |
active-directory | Security Operations Consumer Accounts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/security-operations-consumer-accounts.md | Use log files to investigate and monitor. See the following articles for more: ### Audit logs and automation tools -From the Azure portal, you can view Microsoft Entra audit logs and download as comma separated value (CSV) or JavaScript Object Notation (JSON) files. Use the Azure portal to integrate Microsoft Entra ID logs with other tools to automate monitoring and alerting: +From the Azure portal, you can view Microsoft Entra audit logs and download as comma separated value (CSV) or JavaScript Object Notation (JSON) files. Use the Azure portal to integrate Microsoft Entra logs with other tools to automate monitoring and alerting: * **Microsoft Sentinel** – security analytics with security information and event management (SIEM) capabilities * [What is Microsoft Sentinel?](../../sentinel/overview.md) * [SigmaHR/sigma](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) * **Azure Monitor** – automated monitoring and alerting of various conditions. Create or use workbooks to combine data from different sources. 
* [Azure Monitor overview](../../azure-monitor/overview.md)-* **Azure Event Hubs integrated with a SIEM** - integrate Microsoft Entra ID logs with SIEMs such as Splunk, ArcSight, QRadar and Sumo Logic with Azure Event Hubs +* **Azure Event Hubs integrated with a SIEM** - integrate Microsoft Entra logs with SIEMs such as Splunk, ArcSight, QRadar and Sumo Logic with Azure Event Hubs * [Azure Event Hubs-A big data streaming platform and event ingestion service](../../event-hubs/event-hubs-about.md)- * [Tutorial: Stream Microsoft Entra ID logs to an Azure event hub](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) + * [Tutorial: Stream Microsoft Entra logs to an Azure event hub](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) * **Microsoft Defender for Cloud Apps** – discover and manage apps, govern across apps and resources, and conform cloud app compliance * [Microsoft Defender for Cloud Apps overview](/defender-cloud-apps/what-is-defender-for-cloud-apps) * **Identity Protection** - detect risk on workload identities across sign-in behavior and offline indicators of compromise Use the remainder of the article for recommendations on what to monitor and aler | Large number of account creations or deletions | High | Microsoft Entra audit logs | Activity: Add user<br>Status = success<br>Initiated by (actor) = CPIM Service<br>-and-<br>Activity: Delete user<br>Status = success<br>Initiated by (actor) = CPIM Service | Define a baseline threshold, and then monitor and adjust to suit your organizational behaviors. Limit false alerts. 
| | Accounts created and deleted by non-approved users or processes| Medium | Microsoft Entra audit logs | Initiated by (actor) – USER PRINCIPAL NAME<br>-and-<br>Activity: Add user<br>Status = success<br>Initiated by (actor) != CPIM Service<br>and-or<br>Activity: Delete user<br>Status = success<br>Initiated by (actor) != CPIM Service | If the actors are non-approved users, configure to send an alert. | | Accounts assigned to a privileged role| High | Microsoft Entra audit logs | Activity: Add user<br>Status = success<br>Initiated by (actor) == CPIM Service<br>-and-<br>Activity: Add member to role<br>Status = success | If the account is assigned to a Microsoft Entra role, Azure role, or privileged group membership, alert and prioritize the investigation. |-| Failed sign-in attempts| Medium - if Isolated incident<br>High - if many accounts are experiencing the same pattern | Microsoft Entra Sign-ins log | Status = failed<br>-and-<br>Sign-in error code 50126 - Error validating credentials due to invalid username or password.<br>-and-<br>Application == "CPIM PowerShell Client"<br>-or-<br>Application == "ProxyIdentityExperienceFramework" | Define a baseline threshold, and then monitor and adjust to suit your organizational behaviors and limit false alerts from being generated. | -| Smart lock-out events| Medium - if Isolated incident<br>High - if many accounts are experiencing the same pattern or a VIP | Microsoft Entra Sign-ins log | Status = failed<br>-and-<br>Sign-in error code = 50053 – IdsLocked<br>-and-<br>Application == "CPIM PowerShell Client"<br>-or-<br>Application =="ProxyIdentityExperienceFramework" | Define a baseline threshold, and then monitor and adjust to suit your organizational behaviors and limit false alerts. 
| -| Failed authentications from countries or regions you don't operate from| Medium | Microsoft Entra Sign-ins log | Status = failed<br>-and-<br>Location = \<unapproved location><br>-and-<br>Application == "CPIM PowerShell Client"<br>-or-<br>Application == "ProxyIdentityExperienceFramework" | Monitor entries not equal to provided city names. | -| Increased failed authentications of any type | Medium | Microsoft Entra Sign-ins log | Status = failed<br>-and-<br>Application == "CPIM PowerShell Client"<br>-or-<br>Application == "ProxyIdentityExperienceFramework" | If you don't have a threshold, monitor and alert if failures increase by 10%, or greater. | -| Account disabled/blocked for sign-ins | Low | Microsoft Entra Sign-ins log | Status = Failure<br>-and-<br>error code = 50057, The user account is disabled. | This scenario could indicate someone trying to gain access to an account after they left an organization. The account is blocked, but it's important to log and alert this activity. | -| Measurable increase of successful sign-ins | Low | Microsoft Entra Sign-ins log | Status = Success<br>-and-<br>Application == "CPIM PowerShell Client"<br>-or-<br>Application == "ProxyIdentityExperienceFramework" | If you don't have a threshold, monitor and alert if successful authentications increase by 10%, or greater. | +| Failed sign-in attempts| Medium - if Isolated incident<br>High - if many accounts are experiencing the same pattern | Microsoft Entra sign-in log | Status = failed<br>-and-<br>Sign-in error code 50126 - Error validating credentials due to invalid username or password.<br>-and-<br>Application == "CPIM PowerShell Client"<br>-or-<br>Application == "ProxyIdentityExperienceFramework" | Define a baseline threshold, and then monitor and adjust to suit your organizational behaviors and limit false alerts from being generated. 
| +| Smart lock-out events| Medium - if Isolated incident<br>High - if many accounts are experiencing the same pattern or a VIP | Microsoft Entra sign-in log | Status = failed<br>-and-<br>Sign-in error code = 50053 – IdsLocked<br>-and-<br>Application == "CPIM PowerShell Client"<br>-or-<br>Application =="ProxyIdentityExperienceFramework" | Define a baseline threshold, and then monitor and adjust to suit your organizational behaviors and limit false alerts. | +| Failed authentications from countries or regions you don't operate from| Medium | Microsoft Entra sign-in log | Status = failed<br>-and-<br>Location = \<unapproved location><br>-and-<br>Application == "CPIM PowerShell Client"<br>-or-<br>Application == "ProxyIdentityExperienceFramework" | Monitor entries not equal to provided city names. | +| Increased failed authentications of any type | Medium | Microsoft Entra sign-in log | Status = failed<br>-and-<br>Application == "CPIM PowerShell Client"<br>-or-<br>Application == "ProxyIdentityExperienceFramework" | If you don't have a threshold, monitor and alert if failures increase by 10%, or greater. | +| Account disabled/blocked for sign-ins | Low | Microsoft Entra sign-in log | Status = Failure<br>-and-<br>error code = 50057, The user account is disabled. | This scenario could indicate someone trying to gain access to an account after they left an organization. The account is blocked, but it's important to log and alert this activity. | +| Measurable increase of successful sign-ins | Low | Microsoft Entra sign-in log | Status = Success<br>-and-<br>Application == "CPIM PowerShell Client"<br>-or-<br>Application == "ProxyIdentityExperienceFramework" | If you don't have a threshold, monitor and alert if successful authentications increase by 10%, or greater. 
| ## Privileged accounts | What to monitor | Risk level | Where | Filter / subfilter | Notes | | - | - | - | - | - |-| Sign-in failure, bad password threshold | High | Microsoft Entra Sign-ins log | Status = Failure<br>-and-<br>error code = 50126 | Define a baseline threshold and monitor and adjust to suit your organizational behaviors. Limit false alerts. | -| Failure because of Conditional Access requirement | High | Microsoft Entra Sign-ins log | Status = Failure<br>-and-<br>error code = 53003<br>-and-<br>Failure reason = Blocked by Conditional Access | The event can indicate an attacker is trying to get into the account. | -| Interrupt | High, medium | Microsoft Entra Sign-ins log | Status = Failure<br>-and-<br>error code = 53003<br>-and-<br>Failure reason = Blocked by Conditional Access | The event can indicate an attacker has the account password, but can't pass the MFA challenge. | -| Account lockout | High | Microsoft Entra Sign-ins log | Status = Failure<br>-and-<br>error code = 50053 | Define a baseline threshold, then monitor and adjust to suit your organizational behaviors. Limit false alerts. | -| Account disabled or blocked for sign-ins | low | Microsoft Entra Sign-ins log | Status = Failure<br>-and-<br>Target = User UPN<br>-and-<br>error code = 50057 | The event could indicate someone trying to gain account access after they've left the organization. Although the account is blocked, log and alert this activity. | -| MFA fraud alert or block | High | Microsoft Entra Sign-ins log/Azure Log Analytics | Sign-ins>Authentication details<br> Result details = MFA denied, fraud code entered | Privileged user indicates they haven't instigated the MFA prompt, which could indicate an attacker has the account password. 
| -| MFA fraud alert or block | High | Microsoft Entra Sign-ins log/Azure Log Analytics | Activity type = Fraud reported - User is blocked for MFA or fraud reported - No action taken, based on fraud report tenant-level settings | Privileged user indicated no instigation of the MFA prompt. The scenario can indicate an attacker has the account password. | -| Privileged account sign-ins outside of expected controls | High | Microsoft Entra Sign-ins log | Status = Failure<br>UserPricipalName = \<Admin account> <br> Location = \<unapproved location> <br> IP address = \<unapproved IP><br>Device info = \<unapproved Browser, Operating System> | Monitor and alert entries you defined as unapproved. | -| Outside of normal sign-in times | High | Microsoft Entra Sign-ins log | Status = Success<br>-and-<br>Location =<br>-and-<br>Time = Outside of working hours | Monitor and alert if sign-ins occur outside expected times. Find the normal working pattern for each privileged account and alert if there are unplanned changes outside normal working times. Sign-ins outside normal working hours could indicate compromise or possible insider threat. | +| Sign-in failure, bad password threshold | High | Microsoft Entra sign-in log | Status = Failure<br>-and-<br>error code = 50126 | Define a baseline threshold and monitor and adjust to suit your organizational behaviors. Limit false alerts. | +| Failure because of Conditional Access requirement | High | Microsoft Entra sign-in log | Status = Failure<br>-and-<br>error code = 53003<br>-and-<br>Failure reason = Blocked by Conditional Access | The event can indicate an attacker is trying to get into the account. | +| Interrupt | High, medium | Microsoft Entra sign-in log | Status = Failure<br>-and-<br>error code = 53003<br>-and-<br>Failure reason = Blocked by Conditional Access | The event can indicate an attacker has the account password, but can't pass the MFA challenge. 
| +| Account lockout | High | Microsoft Entra sign-in log | Status = Failure<br>-and-<br>error code = 50053 | Define a baseline threshold, then monitor and adjust to suit your organizational behaviors. Limit false alerts. | +| Account disabled or blocked for sign-ins | Low | Microsoft Entra sign-in log | Status = Failure<br>-and-<br>Target = User UPN<br>-and-<br>error code = 50057 | The event could indicate someone trying to gain account access after they've left the organization. Although the account is blocked, log and alert this activity. | +| MFA fraud alert or block | High | Microsoft Entra sign-in log/Azure Log Analytics | Sign-ins>Authentication details<br> Result details = MFA denied, fraud code entered | Privileged user indicates they haven't instigated the MFA prompt, which could indicate an attacker has the account password. | +| MFA fraud alert or block | High | Microsoft Entra sign-in log/Azure Log Analytics | Activity type = Fraud reported - User is blocked for MFA or fraud reported - No action taken, based on fraud report tenant-level settings | Privileged user indicated no instigation of the MFA prompt. The scenario can indicate an attacker has the account password. | +| Privileged account sign-ins outside of expected controls | High | Microsoft Entra sign-in log | Status = Failure<br>UserPrincipalName = \<Admin account> <br> Location = \<unapproved location> <br> IP address = \<unapproved IP><br>Device info = \<unapproved Browser, Operating System> | Monitor and alert entries you defined as unapproved. | +| Outside of normal sign-in times | High | Microsoft Entra sign-in log | Status = Success<br>-and-<br>Location =<br>-and-<br>Time = Outside of working hours | Monitor and alert if sign-ins occur outside expected times. Find the normal working pattern for each privileged account and alert if there are unplanned changes outside normal working times. Sign-ins outside normal working hours could indicate compromise or possible insider threat. 
| | Password change | High | Microsoft Entra audit logs | Activity actor = Admin/self-service<br>-and-<br>Target = User<br>-and-<br>Status = Success or failure | Alert any admin account password changes, especially for global admins, user admins, subscription admins, and emergency access accounts. Write a query for privileged accounts. | | Changes to authentication methods | High | Microsoft Entra audit logs | Activity: Create identity provider<br>Category: ResourceManagement<br>Target: User Principal Name | The change could indicate an attacker adding an auth method to the account to have continued access. | | Identity Provider updated by non-approved actors | High | Microsoft Entra audit logs | Activity: Update identity provider<br>Category: ResourceManagement<br>Target: User Principal Name | The change could indicate an attacker adding an auth method to the account to have continued access. | Identity Provider deleted by non-approved actors | High | Microsoft Entra access | Administrator granting application permissions (app roles), or highly privileged delegated permissions | High | Microsoft 365 portal | “Add app role assignment to service principal”<br>-where-<br>Target(s) identifies an API with sensitive data (such as Microsoft Graph) “Add delegated permission grant”<br>-where-<br>Target(s) identifies an API with sensitive data (such as Microsoft Graph)<br>-and-<br>DelegatedPermissionGrant.Scope includes high-privilege permissions. | Alert when a global, application, or cloud application administrator consents to an application. Especially look for consent outside normal activity and change procedures. | | Application is granted permissions for Microsoft Graph, Exchange, SharePoint, or Microsoft Entra ID. 
| High | Microsoft Entra audit logs | “Add delegated permission grant”<br>-or-<br>“Add app role assignment to service principal”<br>-where-<br>Target(s) identifies an API with sensitive data (such as Microsoft Graph, Exchange Online, and so on) | Use the alert in the preceding row. | | Highly privileged delegated permissions granted on behalf of all users | High | Microsoft Entra audit logs | “Add delegated permission grant”<br>where<br>Target(s) identifies an API with sensitive data (such as Microsoft Graph)<br>DelegatedPermissionGrant.Scope includes high-privilege permissions<br>-and-<br>DelegatedPermissionGrant.ConsentType is “AllPrincipals”. | Use the alert in the preceding row. |-| Applications that are using the ROPC authentication flow | Medium | Microsoft Entra Sign-ins log | Status=Success<br>Authentication Protocol-ROPC | High level of trust is placed in this application because the credentials can be cached or stored. If possible, move to a more secure authentication flow. Use the process only in automated application testing, if ever. | -| Dangling URI | High | Microsoft Entra ID Logs and Application Registration | Service-Core Directory<br>Category-ApplicationManagement<br>Activity: Update Application<br>Success – Property Name AppAddress | For example, look for dangling URIs pointing to a domain name that is gone, or one you don’t own. | -| Redirect URI configuration changes | High | Microsoft Entra ID logs | Service-Core Directory<br>Category-ApplicationManagement<br>Activity: Update Application<br>Success – Property Name AppAddress | Look for URIs not using HTTPS*, URIs with wildcards at the end or the domain of the URL, URIs that are **not** unique to the application, URIs that point to a domain you don't control. 
| -| Changes to AppID URI | High | Microsoft Entra ID logs | Service-Core Directory<br>Category-ApplicationManagement<br>Activity: Update Application<br>Activity: Update Service principal | Look for AppID URI modifications, such as adding, modifying, or removing the URI. | -| Changes to application ownership | Medium | Microsoft Entra ID logs | Service-Core Directory<br>Category-ApplicationManagement<br>Activity: Add owner to application | Look for instances of users added as application owners outside normal change management activities. | -| Changes to sign out URL | Low | Microsoft Entra ID logs | Service-Core Directory<br>Category-ApplicationManagement<br>Activity: Update Application<br>-and-<br>Activity: Update service principle | Look for modifications to a sign out URL. Blank entries or entries to non-existent locations would stop a user from terminating a session. +| Applications that are using the ROPC authentication flow | Medium | Microsoft Entra sign-in log | Status=Success<br>Authentication Protocol-ROPC | High level of trust is placed in this application because the credentials can be cached or stored. If possible, move to a more secure authentication flow. Use the process only in automated application testing, if ever. | +| Dangling URI | High | Microsoft Entra logs and Application Registration | Service-Core Directory<br>Category-ApplicationManagement<br>Activity: Update Application<br>Success – Property Name AppAddress | For example, look for dangling URIs pointing to a domain name that is gone, or one you don’t own. | +| Redirect URI configuration changes | High | Microsoft Entra logs | Service-Core Directory<br>Category-ApplicationManagement<br>Activity: Update Application<br>Success – Property Name AppAddress | Look for URIs not using HTTPS*, URIs with wildcards at the end or the domain of the URL, URIs that are **not** unique to the application, URIs that point to a domain you don't control. 
| +| Changes to AppID URI | High | Microsoft Entra logs | Service-Core Directory<br>Category-ApplicationManagement<br>Activity: Update Application<br>Activity: Update Service principal | Look for AppID URI modifications, such as adding, modifying, or removing the URI. | +| Changes to application ownership | Medium | Microsoft Entra logs | Service-Core Directory<br>Category-ApplicationManagement<br>Activity: Add owner to application | Look for instances of users added as application owners outside normal change management activities. | +| Changes to sign-out URL | Low | Microsoft Entra logs | Service-Core Directory<br>Category-ApplicationManagement<br>Activity: Update Application<br>-and-<br>Activity: Update service principal | Look for modifications to a sign-out URL. Blank entries or entries to non-existent locations would stop a user from terminating a session. ## Infrastructure |
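The application-change rows above all key off `Category: ApplicationManagement` audit events. As a hedged sketch only, assuming the audit logs are streamed to a Log Analytics workspace with the standard `AuditLogs` schema (exact property names can vary and should be verified in your workspace):

```kusto
// Sketch: surface successful application updates (redirect URI, AppID URI,
// sign-out URL, ownership) from Microsoft Entra audit logs in Log Analytics.
AuditLogs
| where Category == "ApplicationManagement"
| where OperationName in ("Update application", "Update service principal", "Add owner to application")
| where Result == "success"
| mv-expand TargetResource = TargetResources
| extend AppName = tostring(TargetResource.displayName)
| project TimeGenerated, OperationName, AppName, InitiatedBy
```

Pairing a query like this with an alert rule helps flag URI and ownership changes made outside normal change-management windows.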
active-directory | Security Operations Devices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/security-operations-devices.md | The log files you use for investigation and monitoring are: * [Azure Key Vault logs](../..//key-vault/general/logging.md?tabs=Vault) -From the Azure portal, you can view the Microsoft Entra audit logs and download as comma-separated value (CSV) or JavaScript Object Notation (JSON) files. The Azure portal has several ways to integrate Microsoft Entra ID logs with other tools that allow for greater automation of monitoring and alerting: +From the Azure portal, you can view the Microsoft Entra audit logs and download as comma-separated value (CSV) or JavaScript Object Notation (JSON) files. The Azure portal has several ways to integrate Microsoft Entra logs with other tools that allow for greater automation of monitoring and alerting: * **[Microsoft Sentinel](../../sentinel/overview.md)** – enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities. From the Azure portal, you can view the Microsoft Entra audit logs and download * **[Azure Monitor](../..//azure-monitor/overview.md)** – enables automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources. -* **[Azure Event Hubs](../../event-hubs/event-hubs-about.md) -integrated with a SIEM**- [Microsoft Entra ID logs can be integrated to other SIEMs](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) such as Splunk, ArcSight, QRadar, and Sumo Logic via the Azure Event Hubs integration. 
+* **[Azure Event Hubs](../../event-hubs/event-hubs-about.md) -integrated with a SIEM**- [Microsoft Entra logs can be integrated to other SIEMs](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) such as Splunk, ArcSight, QRadar, and Sumo Logic via the Azure Event Hubs integration. * **[Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security)** – enables you to discover and manage apps, govern across apps and resources, and check your cloud apps’ compliance. |
active-directory | Security Operations Infrastructure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/security-operations-infrastructure.md | The log files you use for investigation and monitoring are: * [Azure Key Vault logs](../../key-vault/general/logging.md?tabs=Vault) -From the Azure portal, you can view the Microsoft Entra audit logs and download as comma separated value (CSV) or JavaScript Object Notation (JSON) files. The Azure portal has several ways to integrate Microsoft Entra ID logs with other tools that allow for greater automation of monitoring and alerting: +From the Azure portal, you can view the Microsoft Entra audit logs and download as comma separated value (CSV) or JavaScript Object Notation (JSON) files. The Azure portal has several ways to integrate Microsoft Entra logs with other tools that allow for greater automation of monitoring and alerting: * **[Microsoft Sentinel](../../sentinel/overview.md)** – Enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities. From the Azure portal, you can view the Microsoft Entra audit logs and download * **[Azure Monitor](../../azure-monitor/overview.md)** – Enables automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources. -* **[Azure Event Hubs](../../event-hubs/event-hubs-about.md)** integrated with a SIEM - [Microsoft Entra ID logs can be integrated to other SIEMs](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) such as Splunk, ArcSight, QRadar and Sumo Logic via the Azure Event Hubs integration. 
+* **[Azure Event Hubs](../../event-hubs/event-hubs-about.md)** integrated with a SIEM - [Microsoft Entra logs can be integrated to other SIEMs](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) such as Splunk, ArcSight, QRadar and Sumo Logic via the Azure Event Hubs integration. * **[Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security)** – Enables you to discover and manage apps, govern across apps and resources, and check your cloud apps’ compliance. To configure monitoring for Application Proxy, see [Troubleshoot Application Pro For multifactor authentication (MFA) to be effective, you also need to block legacy authentication. You then need to monitor your environment and alert on any use of legacy authentication. Legacy authentication protocols like POP, SMTP, IMAP, and MAPI can’t enforce MFA. This makes these protocols the preferred entry points for attackers. For more information on tools that you can use to block legacy authentication, see [New tools to block legacy authentication in your organization](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/new-tools-to-block-legacy-authentication-in-your-organization/ba-p/1225302). -Legacy authentication is captured in the Microsoft Entra Sign-ins log as part of the detail of the event. You can use the Azure Monitor workbook to help with identifying legacy authentication usage. For more information, see [Sign-ins using legacy authentication](../reports-monitoring/howto-use-azure-monitor-workbooks.md), which is part of [How to use Azure Monitor Workbooks for Microsoft Entra ID reports](../reports-monitoring/howto-use-azure-monitor-workbooks.md). You can also use the Insecure protocols workbook for Microsoft Sentinel. 
For more information, see [Microsoft Sentinel Insecure Protocols Workbook Implementation Guide](https://techcommunity.microsoft.com/t5/azure-sentinel/azure-sentinel-insecure-protocols-workbook-implementation-guide/ba-p/1197564). Specific activities to monitor include: +Legacy authentication is captured in the Microsoft Entra sign-in log as part of the detail of the event. You can use the Azure Monitor workbook to help with identifying legacy authentication usage. For more information, see [Sign-ins using legacy authentication](../reports-monitoring/howto-use-azure-monitor-workbooks.md), which is part of [How to use Azure Monitor Workbooks for Microsoft Entra ID reports](../reports-monitoring/howto-use-azure-monitor-workbooks.md). You can also use the Insecure protocols workbook for Microsoft Sentinel. For more information, see [Microsoft Sentinel Insecure Protocols Workbook Implementation Guide](https://techcommunity.microsoft.com/t5/azure-sentinel/azure-sentinel-insecure-protocols-workbook-implementation-guide/ba-p/1197564). Specific activities to monitor include: | What to monitor| Risk level| Where| Filter/sub-filter| Notes | | - | - | - | - | - |-| Legacy authentications|High | Microsoft Entra Sign-ins log| ClientApp : POP<br>ClientApp : IMAP<br>ClientApp : MAPI<br>ClientApp: SMTP<br>ClientApp : ActiveSync go to EXO<br>Other Clients = SharePoint and EWS| In federated domain environments, failed authentications aren't recorded and don't appear in the log. | +| Legacy authentications|High | Microsoft Entra sign-in log| ClientApp : POP<br>ClientApp : IMAP<br>ClientApp : MAPI<br>ClientApp: SMTP<br>ClientApp : ActiveSync go to EXO<br>Other Clients = SharePoint and EWS| In federated domain environments, failed authentications aren't recorded and don't appear in the log. 
| <a name='azure-ad-connect'></a> Monitoring single sign-on and Kerberos activity can help you detect general cred | What to monitor| Risk level| Where| Filter/sub-filter| Notes | | - | - | - | - | - |-| Errors associated with SSO and Kerberos validation failures|Medium | Microsoft Entra Sign-ins log| | Single sign-on list of error codes at [Single sign-on](../hybrid/connect/tshoot-connect-sso.md). | +| Errors associated with SSO and Kerberos validation failures|Medium | Microsoft Entra sign-in log| | Single sign-on list of error codes at [Single sign-on](../hybrid/connect/tshoot-connect-sso.md). | | Query for troubleshooting errors|Medium | PowerShell| See query following table.| Check in each forest with SSO enabled. | | Kerberos-related events|High | Microsoft Defender for Identity monitoring| | Review guidance available at [Microsoft Defender for Identity Lateral Movement Paths (LMPs)](/defender-for-identity/use-case-lateral-movement-path) | The DC agent Admin log is the primary source of information for how the software Complete reference for Microsoft Entra ID audit activities is available at [Microsoft Entra ID audit activity reference](../reports-monitoring/reference-audit-activities.md). ## Conditional Access-In Microsoft Entra ID, you can protect access to your resources by configuring Conditional Access policies. As an IT administrator, you want to ensure your Conditional Access policies work as expected to ensure that your resources are protected. Monitoring and alerting on changes to the Conditional Access service ensures policies defined by your organization for access to data are enforced. Microsoft Entra ID logs when changes are made to Conditional Access and also provides workbooks to ensure your policies are providing the expected coverage. ++In Microsoft Entra ID, you can protect access to your resources by configuring Conditional Access policies. 
As an IT administrator, you want to ensure your Conditional Access policies work as expected so that your resources are protected. Monitoring and alerting on changes to the Conditional Access service ensures policies defined by your organization for access to data are enforced. Microsoft Entra logs when changes are made to Conditional Access and also provides workbooks to ensure your policies are providing the expected coverage. **Workbook Links** |
active-directory | Security Operations Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/security-operations-introduction.md | The log files you use for investigation and monitoring are: * [Microsoft 365 Audit logs](/microsoft-365/compliance/auditing-solutions-overview) * [Azure Key Vault logs](../../key-vault/general/logging.md?tabs=Vault) -From the Azure portal, you can view the Microsoft Entra audit logs. Download logs as comma separated value (CSV) or JavaScript Object Notation (JSON) files. The Azure portal has several ways to integrate Microsoft Entra ID logs with other tools that allow for greater automation of monitoring and alerting: +From the Azure portal, you can view the Microsoft Entra audit logs. Download logs as comma separated value (CSV) or JavaScript Object Notation (JSON) files. The Azure portal has several ways to integrate Microsoft Entra logs with other tools that allow for greater automation of monitoring and alerting: * **[Microsoft Sentinel](../../sentinel/overview.md)** - Enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities. From the Azure portal, you can view the Microsoft Entra audit logs. Download log * **[Azure Monitor](../../azure-monitor/overview.md)** - Enables automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources. -* **[Azure Event Hubs](../../event-hubs/event-hubs-about.md)** integrated with a SIEM. Microsoft Entra ID logs can be integrated to other SIEMs such as Splunk, ArcSight, QRadar and Sumo Logic via the Azure Event Hubs integration. For more information, see [Stream Microsoft Entra ID logs to an Azure event hub](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md). +* **[Azure Event Hubs](../../event-hubs/event-hubs-about.md)** integrated with a SIEM. 
Microsoft Entra logs can be integrated to other SIEMs such as Splunk, ArcSight, QRadar and Sumo Logic via the Azure Event Hubs integration. For more information, see [Stream Microsoft Entra logs to an Azure event hub](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md). * **[Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security)** - Enables you to discover and manage apps, govern across apps and resources, and check the compliance of your cloud apps. |
active-directory | Security Operations Privileged Accounts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/security-operations-privileged-accounts.md | You can monitor privileged account sign-in events in the Microsoft Entra sign-in | What to monitor | Risk level | Where | Filter/subfilter | Notes | | - | - | - | - | - |-| Sign-in failure, bad password threshold | High | Microsoft Entra Sign-ins log | Status = Failure<br>-and-<br>error code = 50126 | Define a baseline threshold and then monitor and adjust to suit your organizational behaviors and limit false alerts from being generated.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/PrivilegedAccountsSigninFailureSpikes.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | -| Failure because of Conditional Access requirement |High | Microsoft Entra Sign-ins log | Status = Failure<br>-and-<br>error code = 53003<br>-and-<br>Failure reason = Blocked by Conditional Access | This event can be an indication an attacker is trying to get into the account.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/UserAccounts-CABlockedSigninSpikes.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Sign-in failure, bad password threshold | High | Microsoft Entra sign-in log | Status = Failure<br>-and-<br>error code = 50126 | Define a baseline threshold and then monitor and adjust to suit your organizational behaviors and limit false alerts from being generated.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/PrivilegedAccountsSigninFailureSpikes.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Failure because of Conditional Access requirement |High | Microsoft 
Entra sign-in log | Status = Failure<br>-and-<br>error code = 53003<br>-and-<br>Failure reason = Blocked by Conditional Access | This event can be an indication an attacker is trying to get into the account.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/UserAccounts-CABlockedSigninSpikes.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | | Privileged accounts that don't follow naming policy| | Azure subscription | [List Azure role assignments using the Azure portal](../../role-based-access-control/role-assignments-list-portal.md)| List role assignments for subscriptions and alert where the sign-in name doesn't match your organization's format. An example is the use of ADM_ as a prefix. | | Interrupt | High, medium | Microsoft Entra Sign-ins | Status = Interrupted<br>-and-<br>error code = 50074<br>-and-<br>Failure reason = Strong auth required<br>Status = Interrupted<br>-and-<br>Error code = 500121<br>Failure reason = Authentication failed during strong authentication request | This event can be an indication an attacker has the password for the account but can't pass the multi-factor authentication challenge.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/AADPrivilegedAccountsFailedMFA.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | | Privileged accounts that don't follow naming policy| High | Microsoft Entra directory | [List Microsoft Entra role assignments](../roles/view-assignments.md)| List role assignments for Microsoft Entra roles and alert where the UPN doesn't match your organization's format. An example is the use of ADM_ as a prefix. | | Discover privileged accounts not registered for multi-factor authentication | High | Microsoft Graph API| Query for IsMFARegistered eq false for admin accounts. 
[List credentialUserRegistrationDetails - Microsoft Graph beta](/graph/api/reportroot-list-credentialuserregistrationdetails?view=graph-rest-beta&preserve-view=true&tabs=http) | Audit and investigate to determine if the event is intentional or an oversight. |-| Account lockout | High | Microsoft Entra Sign-ins log | Status = Failure<br>-and-<br>error code = 50053 | Define a baseline threshold, and then monitor and adjust to suit your organizational behaviors and limit false alerts from being generated.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/PrivilegedAccountsLockedOut.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | -| Account disabled or blocked for sign-ins | Low | Microsoft Entra Sign-ins log | Status = Failure<br>-and-<br>Target = User UPN<br>-and-<br>error code = 50057 | This event could indicate someone is trying to gain access to an account after they've left the organization. 
Although the account is blocked, it's still important to log and alert on this activity.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/UserAccounts-BlockedAccounts.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | -| MFA fraud alert or block | High | Microsoft Entra Sign-ins log/Azure Log Analytics | Sign-ins>Authentication details Result details = MFA denied, fraud code entered | Privileged user has indicated they haven't instigated the multi-factor authentication prompt, which could indicate an attacker has the password for the account.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/MFARejectedbyUser.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Account lockout | High | Microsoft Entra sign-in log | Status = Failure<br>-and-<br>error code = 50053 | Define a baseline threshold, and then monitor and adjust to suit your organizational behaviors and limit false alerts from being generated.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/PrivilegedAccountsLockedOut.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Account disabled or blocked for sign-ins | Low | Microsoft Entra sign-in log | Status = Failure<br>-and-<br>Target = User UPN<br>-and-<br>error code = 50057 | This event could indicate someone is trying to gain access to an account after they've left the organization. 
Although the account is blocked, it's still important to log and alert on this activity.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/UserAccounts-BlockedAccounts.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| MFA fraud alert or block | High | Microsoft Entra sign-in log/Azure Log Analytics | Sign-ins>Authentication details Result details = MFA denied, fraud code entered | Privileged user has indicated they haven't instigated the multi-factor authentication prompt, which could indicate an attacker has the password for the account.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/MFARejectedbyUser.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | | MFA fraud alert or block | High | Microsoft Entra audit log/Azure Log Analytics | Activity type = Fraud reported - User is blocked for MFA or fraud reported - No action taken (based on tenant-level settings for fraud report) | Privileged user has indicated they haven't instigated the multi-factor authentication prompt, which could indicate an attacker has the password for the account.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/MFARejectedbyUser.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |-| Privileged account sign-ins outside of expected controls | | Microsoft Entra Sign-ins log | Status = Failure<br>UserPricipalName = \<Admin account\><br>Location = \<unapproved location\><br>IP address = \<unapproved IP\><br>Device info = \<unapproved Browser, Operating System\> | Monitor and alert on any entries that you've defined as unapproved.<br>[Microsoft Sentinel 
template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SuspiciousSignintoPrivilegedAccount.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | -| Outside of normal sign-in times | High | Microsoft Entra Sign-ins log | Status = Success<br>-and-<br>Location =<br>-and-<br>Time = Outside of working hours | Monitor and alert if sign-ins occur outside of expected times. It's important to find the normal working pattern for each privileged account and to alert if there are unplanned changes outside of normal working times. Sign-ins outside of normal working hours could indicate compromise or possible insider threats.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/AnomolousSignInsBasedonTime.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Privileged account sign-ins outside of expected controls | | Microsoft Entra sign-in log | Status = Failure<br>UserPrincipalName = \<Admin account\><br>Location = \<unapproved location\><br>IP address = \<unapproved IP\><br>Device info = \<unapproved Browser, Operating System\> | Monitor and alert on any entries that you've defined as unapproved.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SuspiciousSignintoPrivilegedAccount.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Outside of normal sign-in times | High | Microsoft Entra sign-in log | Status = Success<br>-and-<br>Location =<br>-and-<br>Time = Outside of working hours | Monitor and alert if sign-ins occur outside of expected times. It's important to find the normal working pattern for each privileged account and to alert if there are unplanned changes outside of normal working times. 
Sign-ins outside of normal working hours could indicate compromise or possible insider threats.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/AnomolousSignInsBasedonTime.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | | Identity protection risk | High | Identity Protection logs | Risk state = At risk<br>-and-<br>Risk level = Low, medium, high<br>-and-<br>Activity = Unfamiliar sign-in/TOR, and so on | This event indicates there's some abnormality detected with the sign-in for the account and should be alerted on. | | Password change | High | Microsoft Entra audit logs | Activity actor = Admin/self-service<br>-and-<br>Target = User<br>-and-<br>Status = Success or failure | Alert on any admin account password changes, especially for global admins, user admins, subscription admins, and emergency access accounts. Write a query targeted at all privileged accounts.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/PrivilegedAccountPasswordChanges.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |-| Change in legacy authentication protocol | High | Microsoft Entra Sign-ins log | Client App = Other client, IMAP, POP3, MAPI, SMTP, and so on<br>-and-<br>Username = UPN<br>-and-<br>Application = Exchange (example) | Many attacks use legacy authentication, so if there's a change in auth protocol for the user, it could be an indication of an attack.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/17ead56ae30b1a8e46bb0f95a458bdeb2d30ba9b/Hunting%20Queries/SigninLogs/LegacyAuthAttempt.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | -| New device or location | High | Microsoft Entra Sign-ins log | Device info = Device 
ID<br>-and-<br>Browser<br>-and-<br>OS<br>-and-<br>Compliant/Managed<br>-and-<br>Target = User<br>-and-<br>Location | Most admin activity should be from [privileged access devices](/security/compass/privileged-access-devices), from a limited number of locations. For this reason, alert on new devices or locations.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SuspiciousSignintoPrivilegedAccount.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Change in legacy authentication protocol | High | Microsoft Entra sign-in log | Client App = Other client, IMAP, POP3, MAPI, SMTP, and so on<br>-and-<br>Username = UPN<br>-and-<br>Application = Exchange (example) | Many attacks use legacy authentication, so if there's a change in auth protocol for the user, it could be an indication of an attack.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/17ead56ae30b1a8e46bb0f95a458bdeb2d30ba9b/Hunting%20Queries/SigninLogs/LegacyAuthAttempt.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| New device or location | High | Microsoft Entra sign-in log | Device info = Device ID<br>-and-<br>Browser<br>-and-<br>OS<br>-and-<br>Compliant/Managed<br>-and-<br>Target = User<br>-and-<br>Location | Most admin activity should be from [privileged access devices](/security/compass/privileged-access-devices), from a limited number of locations. 
For this reason, alert on new devices or locations.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SuspiciousSignintoPrivilegedAccount.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | | Audit alert setting is changed | High | Microsoft Entra audit logs | Service = PIM<br>-and-<br>Category = Role management<br>-and-<br>Activity = Disable PIM alert<br>-and-<br>Status = Success | Changes to a core alert should be alerted if unexpected.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SecurityAlert/DetectPIMAlertDisablingActivity.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |-| Administrators authenticating to other Microsoft Entra tenants| Medium| Microsoft Entra Sign-ins log| Status = success<br><br>Resource tenantID != Home Tenant ID| When scoped to Privileged Users, this monitor detects when an administrator has successfully authenticated to another Microsoft Entra tenant with an identity in your organization's tenant. <br><br>Alert if Resource TenantID isn't equal to Home Tenant ID<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/AdministratorsAuthenticatingtoAnotherAzureADTenant.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Administrators authenticating to other Microsoft Entra tenants| Medium| Microsoft Entra sign-in log| Status = success<br><br>Resource tenantID != Home Tenant ID| When scoped to Privileged Users, this monitor detects when an administrator has successfully authenticated to another Microsoft Entra tenant with an identity in your organization's tenant. 
<br><br>Alert if Resource TenantID isn't equal to Home Tenant ID<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/AdministratorsAuthenticatingtoAnotherAzureADTenant.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | |Admin User state changed from Guest to Member|Medium|Microsoft Entra audit logs|Activity: Update user<br><br>Category: UserManagement<br><br>UserType changed from Guest to Member|Monitor and alert on change of user type from Guest to Member.<br><br> Was this change expected?<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/UserStatechangedfromGuesttoMember.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | |Guest users invited to tenant by non-approved inviters|Medium|Microsoft Entra audit logs|Activity: Invite external user<br><br>Category: UserManagement<br><br>Initiated by (actor): User Principal Name|Monitor and alert on non-approved actors inviting external users.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/GuestUsersInvitedtoTenantbyNewInviters.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | |
active-directory | Security Operations User Accounts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/security-operations-user-accounts.md | For more information, visit [What is Identity Protection](../identity-protection ### What to look for -Configure monitoring on the data within the Microsoft Entra Sign-ins Logs to ensure that alerting occurs and adheres to your organization's security policies. Some examples of this are: +Configure monitoring on the data within the Microsoft Entra sign-in logs to ensure that alerting occurs and adheres to your organization's security policies. Some examples of this are: * **Failed Authentications**: As humans we all get our passwords wrong from time to time. However, many failed authentications can indicate that a bad actor is trying to obtain access. Attacks differ in ferocity but can range from a few attempts per hour to a much higher rate. For example, Password Spray normally preys on easier passwords against many accounts, while Brute Force attempts many passwords against targeted accounts. 
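The password-spray versus brute-force distinction described above can be made concrete with a toy heuristic: the two attack shapes differ in how attempts distribute across accounts. This is an illustrative sketch only; the function name, event shape, and threshold are assumptions, not from the article or any Microsoft SDK:

```python
from collections import Counter

def classify_failure_pattern(failed_events, per_user_threshold=10):
    """Rough heuristic for a window of failed sign-ins.

    failed_events: iterable of (username, timestamp) tuples.
    Brute force concentrates many attempts on a few targeted accounts;
    password spray spreads a few attempts across many accounts.
    The threshold is illustrative, not guidance from the article.
    """
    attempts_per_user = Counter(user for user, _ in failed_events)
    if not attempts_per_user:
        return "none"
    if max(attempts_per_user.values()) >= per_user_threshold:
        return "brute-force-like"  # concentrated on targeted accounts
    if len(attempts_per_user) > 1:
        return "spray-like"        # thin spread across many accounts
    return "inconclusive"
```

In practice you would drive such a check from the sign-in log data rather than in-memory tuples; the point is only that the two attack shapes separate along the attempts-per-account axis.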
The following are listed in order of importance based on the effect and severity | What to monitor| Risk Level| Where| Filter/sub-filter| Notes | | - |- |- |- |- |-| Users authenticating to other Microsoft Entra tenants.| Low| Microsoft Entra Sign-ins log| Status = success<br>Resource tenantID != Home Tenant ID| Detects when a user has successfully authenticated to another Microsoft Entra tenant with an identity in your organization's tenant.<br>Alert if Resource TenantID isn't equal to Home Tenant ID <br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/AuditLogs/UsersAuthenticatingtoOtherAzureADTenants.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)| +| Users authenticating to other Microsoft Entra tenants.| Low| Microsoft Entra sign-in log| Status = success<br>Resource tenantID != Home Tenant ID| Detects when a user has successfully authenticated to another Microsoft Entra tenant with an identity in your organization's tenant.<br>Alert if Resource TenantID isn't equal to Home Tenant ID <br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/AuditLogs/UsersAuthenticatingtoOtherAzureADTenants.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)| |User state changed from Guest to Member|Medium|Microsoft Entra audit logs|Activity: Update user<br>Category: UserManagement<br>UserType changed from Guest to Member|Monitor and alert on change of user type from Guest to Member. 
Was this expected?<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/UserStatechangedfromGuesttoMember.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |Guest users invited to tenant by non-approved inviters|Medium|Microsoft Entra audit logs|Activity: Invite external user<br>Category: UserManagement<br>Initiated by (actor): User Principal Name|Monitor and alert on non-approved actors inviting external users.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/GuestUsersInvitedtoTenantbyNewInviters.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)| The following are listed in order of importance based on the effect and severity | What to monitor| Risk Level| Where| Filter/sub-filter| Notes | | - |- |- |- |- |-| Failed sign-in attempts.| Medium - if Isolated Incident<br>High - if many accounts are experiencing the same pattern or a VIP.| Microsoft Entra Sign-ins log| Status = failed<br>-and-<br>Sign-in error code 50126 - <br>Error validating credentials due to invalid username or password.| Define a baseline threshold, and then monitor and adjust to suite your organizational behaviors and limit false alerts from being generated.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SpikeInFailedSignInAttempts.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | -| Smart lock-out events.| Medium - if Isolated Incident<br>High - if many accounts are experiencing the same pattern or a VIP.| Microsoft Entra Sign-ins log| Status = failed<br>-and-<br>Sign-in error code = 50053 ΓÇô IdsLocked| Define a baseline threshold, and then monitor and adjust to suite your organizational behaviors and limit false alerts from being generated.<br>[Microsoft Sentinel 
template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SmartLockouts.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)| -| Interrupts| Medium - if Isolated Incident<br>High - if many accounts are experiencing the same pattern or a VIP.| Microsoft Entra Sign-ins log| 500121, Authentication failed during strong authentication request. <br>-or-<br>50097, Device authentication is required or 50074, Strong Authentication is required. <br>-or-<br>50155, DeviceAuthenticationFailed<br>-or-<br>50158, ExternalSecurityChallenge - External security challenge wasn't satisfied<br>-or-<br>53003 and Failure reason = blocked by Conditional Access| Monitor and alert on interrupts.<br>Define a baseline threshold, and then monitor and adjust to suite your organizational behaviors and limit false alerts from being generated.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/AADPrivilegedAccountsFailedMFA.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Failed sign-in attempts.| Medium - if Isolated Incident<br>High - if many accounts are experiencing the same pattern or a VIP.| Microsoft Entra sign-in log| Status = failed<br>-and-<br>Sign-in error code 50126 - <br>Error validating credentials due to invalid username or password.| Define a baseline threshold, and then monitor and adjust to suit your organizational behaviors and limit false alerts from being generated.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SpikeInFailedSignInAttempts.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Smart lock-out events.| Medium - if Isolated Incident<br>High - if many accounts are experiencing the same pattern or a VIP.| Microsoft Entra sign-in log| Status = failed<br>-and-<br>Sign-in error code = 
50053 – IdsLocked| Define a baseline threshold, and then monitor and adjust to suit your organizational behaviors and limit false alerts from being generated.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SmartLockouts.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)| +| Interrupts| Medium - if Isolated Incident<br>High - if many accounts are experiencing the same pattern or a VIP.| Microsoft Entra sign-in log| 500121, Authentication failed during strong authentication request. <br>-or-<br>50097, Device authentication is required or 50074, Strong Authentication is required. <br>-or-<br>50155, DeviceAuthenticationFailed<br>-or-<br>50158, ExternalSecurityChallenge - External security challenge wasn't satisfied<br>-or-<br>53003 and Failure reason = blocked by Conditional Access| Monitor and alert on interrupts.<br>Define a baseline threshold, and then monitor and adjust to suit your organizational behaviors and limit false alerts from being generated.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/AADPrivilegedAccountsFailedMFA.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | 
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | | - |- |- |- |- |-| Multi-factor authentication (MFA) fraud alerts.| High| Microsoft Entra Sign-ins log| Status = failed<br>-and-<br>Details = MFA Denied<br>| Monitor and alert on any entry.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/MFARejectedbyUser.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)| -| Failed authentications from countries/regions you don't operate out of.| Medium| Microsoft Entra Sign-ins log| Location = \<unapproved location\>| Monitor and alert on any entries. <br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/AuthenticationAttemptfromNewCountry.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | -| Failed authentications for legacy protocols or protocols that aren't used.| Medium| Microsoft Entra Sign-ins log| Status = failure<br>-and-<br>Client app = Other Clients, POP, IMAP, MAPI, SMTP, ActiveSync| Monitor and alert on any entries.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/9bd30c2d4f6a2de17956cd11536a83adcbfc1757/Hunting%20Queries/SigninLogs/LegacyAuthAttempt.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | -| Failures blocked by Conditional Access.| Medium| Microsoft Entra Sign-ins log| Error code = 53003 <br>-and-<br>Failure reason = blocked by Conditional Access| Monitor and alert on any entries.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/UserAccounts-CABlockedSigninSpikes.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | -| Increased failed authentications of any type.| Medium| Microsoft Entra Sign-ins log| Capture increases in failures across the board. 
That is, the failure total for today is >10% on the same day, the previous week.| If you don't have a set threshold, monitor and alert if failures increase by 10% or greater.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SpikeInFailedSignInAttempts.yaml) | -| Authentication occurring at times and days of the week when countries/regions don't conduct normal business operations.| Low| Microsoft Entra Sign-ins log| Capture interactive authentication occurring outside of normal operating days\time. <br>Status = success<br>-and-<br>Location = \<location\><br>-and-<br>Day\Time = \<not normal working hours\>| Monitor and alert on any entries.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/AnomolousSignInsBasedonTime.yaml) | -| Account disabled/blocked for sign-ins| Low| Microsoft Entra Sign-ins log| Status = Failure<br>-and-<br>error code = 50057, The user account is disabled.| This could indicate someone is trying to gain access to an account once they have left an organization. 
Although the account is blocked, it is important to log and alert on this activity.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/UserAccounts-BlockedAccounts.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Multi-factor authentication (MFA) fraud alerts.| High| Microsoft Entra sign-in log| Status = failed<br>-and-<br>Details = MFA Denied<br>| Monitor and alert on any entry.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/MFARejectedbyUser.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)| +| Failed authentications from countries/regions you don't operate out of.| Medium| Microsoft Entra sign-in log| Location = \<unapproved location\>| Monitor and alert on any entries. <br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/AuthenticationAttemptfromNewCountry.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Failed authentications for legacy protocols or protocols that aren't used.| Medium| Microsoft Entra sign-in log| Status = failure<br>-and-<br>Client app = Other Clients, POP, IMAP, MAPI, SMTP, ActiveSync| Monitor and alert on any entries.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/9bd30c2d4f6a2de17956cd11536a83adcbfc1757/Hunting%20Queries/SigninLogs/LegacyAuthAttempt.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Failures blocked by Conditional Access.| Medium| Microsoft Entra sign-in log| Error code = 53003 <br>-and-<br>Failure reason = blocked by Conditional Access| Monitor and alert on any entries.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/UserAccounts-CABlockedSigninSpikes.yaml)<br>[Sigma 
rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Increased failed authentications of any type.| Medium| Microsoft Entra sign-in log| Capture increases in failures across the board. That is, the failure total for today is >10% higher than on the same day of the previous week.| If you don't have a set threshold, monitor and alert if failures increase by 10% or greater.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SpikeInFailedSignInAttempts.yaml) | +| Authentication occurring at times and days of the week when countries/regions don't conduct normal business operations.| Low| Microsoft Entra sign-in log| Capture interactive authentication occurring outside of normal operating days\time. <br>Status = success<br>-and-<br>Location = \<location\><br>-and-<br>Day\Time = \<not normal working hours\>| Monitor and alert on any entries.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/AnomolousSignInsBasedonTime.yaml) | +| Account disabled/blocked for sign-ins| Low| Microsoft Entra sign-in log| Status = Failure<br>-and-<br>error code = 50057, The user account is disabled.| This could indicate someone is trying to gain access to an account once they have left an organization. 
Although the account is blocked, it is important to log and alert on this activity.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/UserAccounts-BlockedAccounts.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | ### Monitoring for successful unusual sign ins | What to monitor| Risk Level| Where| Filter/sub-filter| Notes | | - |- |- |- |- |-| Authentications of privileged accounts outside of expected controls.| High| Microsoft Entra Sign-ins log| Status = success<br>-and-<br>UserPricipalName = \<Admin account\><br>-and-<br>Location = \<unapproved location\><br>-and-<br>IP Address = \<unapproved IP\><br>Device Info= \<unapproved Browser, Operating System\><br>| Monitor and alert on successful authentication for privileged accounts outside of expected controls. Three common controls are listed. <br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/AuthenticationsofPrivilegedAccountsOutsideofExpectedControls.yaml)<br>[Sigma ruless](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | -| When only single-factor authentication is required.| Low| Microsoft Entra Sign-ins log| Status = success<br>Authentication requirement = Single-factor authentication| Monitor periodically and ensure expected behavior.<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Authentications of privileged accounts outside of expected controls.| High| Microsoft Entra sign-in log| Status = success<br>-and-<br>UserPrincipalName = \<Admin account\><br>-and-<br>Location = \<unapproved location\><br>-and-<br>IP Address = \<unapproved IP\><br>Device Info = \<unapproved Browser, Operating System\><br>| Monitor and alert on successful authentication for privileged accounts outside of expected controls. Three common controls are listed. 
<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/AuthenticationsofPrivilegedAccountsOutsideofExpectedControls.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| When only single-factor authentication is required.| Low| Microsoft Entra sign-in log| Status = success<br>Authentication requirement = Single-factor authentication| Monitor periodically and ensure expected behavior.<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | | Discover privileged accounts not registered for MFA.| High| Azure Graph API| Query for IsMFARegistered eq false for administrator accounts. <br>[List credentialUserRegistrationDetails - Microsoft Graph beta](/graph/api/reportroot-list-credentialuserregistrationdetails?view=graph-rest-beta&preserve-view=true&tabs=http)| Audit and investigate to determine if intentional or an oversight. |-| Successful authentications from countries/regions your organization doesn't operate out of.| Medium| Microsoft Entra Sign-ins log| Status = success<br>Location = \<unapproved country/region\>| Monitor and alert on any entries not equal to the city names you provide.<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | -| Successful authentication, session blocked by Conditional Access.| Medium| Microsoft Entra Sign-ins log| Status = success<br>-and-<br>error code = 53003 – Failure reason, blocked by Conditional Access| Monitor and investigate when authentication is successful, but session is blocked by Conditional Access.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/UserAccounts-CABlockedSigninSpikes.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | -| Successful authentication after you have disabled legacy authentication.| Medium| Microsoft Entra Sign-ins log| status = success 
<br>-and-<br>Client app = Other Clients, POP, IMAP, MAPI, SMTP, ActiveSync| If your organization has disabled legacy authentication, monitor and alert when successful legacy authentication has taken place.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/9bd30c2d4f6a2de17956cd11536a83adcbfc1757/Hunting%20Queries/SigninLogs/LegacyAuthAttempt.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Successful authentications from countries/regions your organization doesn't operate out of.| Medium| Microsoft Entra sign-in log| Status = success<br>Location = \<unapproved country/region\>| Monitor and alert on any entries not equal to the city names you provide.<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Successful authentication, session blocked by Conditional Access.| Medium| Microsoft Entra sign-in log| Status = success<br>-and-<br>error code = 53003 – Failure reason, blocked by Conditional Access| Monitor and investigate when authentication is successful, but session is blocked by Conditional Access.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/UserAccounts-CABlockedSigninSpikes.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Successful authentication after you have disabled legacy authentication.| Medium| Microsoft Entra sign-in log| status = success <br>-and-<br>Client app = Other Clients, POP, IMAP, MAPI, SMTP, ActiveSync| If your organization has disabled legacy authentication, monitor and alert when successful legacy authentication has taken place.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/9bd30c2d4f6a2de17956cd11536a83adcbfc1757/Hunting%20Queries/SigninLogs/LegacyAuthAttempt.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | We recommend you periodically review 
authentications to medium business impact (MBI) and high business impact (HBI) applications where only single-factor authentication is required. For each, you want to determine if single-factor authentication was expected or not. In addition, review for successful authentication increases or at unexpected times, based on the location. | What to monitor| Risk Level| Where| Filter/sub-filter| Notes | | - | - |- |- |- |-| Authentications to MBI and HBI application using single-factor authentication.| Low| Microsoft Entra Sign-ins log| status = success<br>-and-<br>Application ID = \<HBI app\> <br>-and-<br>Authentication requirement = single-factor authentication.| Review and validate this configuration is intentional.<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | -| Authentications at days and times of the week or year that countries/regions do not conduct normal business operations.| Low| Microsoft Entra Sign-ins log| Capture interactive authentication occurring outside of normal operating days\time. <br>Status = success<br>Location = \<location\><br>Date\Time = \<not normal working hours\>| Monitor and alert on authentications days and times of the week or year that countries/regions do not conduct normal business operations.<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | -| Measurable increase of successful sign ins.| Low| Microsoft Entra Sign-ins log| Capture increases in successful authentication across the board. 
That is, success totals for today are >10% on the same day, the previous week.| If you don't have a set threshold, monitor and alert if successful authentications increase by 10% or greater.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/UserAccountsMeasurableincreaseofsuccessfulsignins.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Authentications to MBI and HBI applications using single-factor authentication.| Low| Microsoft Entra sign-in log| status = success<br>-and-<br>Application ID = \<HBI app\> <br>-and-<br>Authentication requirement = single-factor authentication.| Review and validate this configuration is intentional.<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Authentications at days and times of the week or year that countries/regions do not conduct normal business operations.| Low| Microsoft Entra sign-in log| Capture interactive authentication occurring outside of normal operating days\time. <br>Status = success<br>Location = \<location\><br>Date\Time = \<not normal working hours\>| Monitor and alert on authentications at days and times of the week or year that countries/regions do not conduct normal business operations.<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Measurable increase of successful sign ins.| Low| Microsoft Entra sign-in log| Capture increases in successful authentication across the board. 
That is, success totals for today are >10% higher than on the same day of the previous week.| If you don't have a set threshold, monitor and alert if successful authentications increase by 10% or greater.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/UserAccountsMeasurableincreaseofsuccessfulsignins.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | ## Next steps |
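Several rows above apply the same baseline rule: alert when today's total exceeds the total for the same weekday last week by 10% or more. A minimal sketch of that comparison (the function name and the zero-baseline handling are assumptions of this sketch, not from the article):

```python
def spike_detected(today_count, same_day_last_week_count, threshold=0.10):
    """True when today's total exceeds the same weekday last week by the
    threshold (10% by default, per the guidance above) or more.
    Treating a count with no prior baseline as a spike is an assumption
    of this sketch."""
    if same_day_last_week_count == 0:
        return today_count > 0
    increase = (today_count - same_day_last_week_count) / same_day_last_week_count
    return increase >= threshold
```

The same check serves both the failed-authentication and successful-sign-in tables; only the counts fed into it differ.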
active-directory | Concept Authentication Methods Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-methods-manage.md | Microsoft Entra ID allows the use of a range of authentication methods to suppor The Authentication methods policy is the recommended way to manage authentication methods, including modern methods like passwordless authentication. [Authentication Policy Administrators](../roles/permissions-reference.md#authentication-policy-administrator) can edit this policy to enable authentication methods for all users or specific groups. -Methods enabled in the Authentication methods policy can typically be used anywhere in Microsoft Entra ID - for both authentication and password reset scenarios. The exception is that some methods are inherently limited to use in authentication, such as FIDO2 and Windows Hello for Business, and others are limited to use in password reset, such as security questions. For more control over which methods are usable in a given authentication scenario, consider using the **Authentication Strengths** feature. +Methods enabled in the Authentication methods policy can typically be used anywhere in Microsoft Entra ID, for both authentication and password reset scenarios. The exception is that some methods are inherently limited to use in authentication, such as FIDO2 and Windows Hello for Business, and others are limited to use in password reset, such as security questions. For more control over which methods are usable in a given authentication scenario, consider using the **Authentication Strengths** feature. Most methods also have configuration parameters to more precisely control how that method can be used. For example, if you enable **Voice calls**, you can also specify whether an office phone can be used in addition to a mobile phone. 
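As a rough model of the targeting behavior described above (enabling a method for all users or for specific groups), the evaluation can be sketched as follows. The field names here are invented for illustration and are not the Microsoft Graph schema:

```python
def method_enabled_for(user_groups, method_config):
    """Illustrative evaluation of per-method targeting: a method is
    usable by a user when it's enabled and the user is covered by an
    include target, either "all_users" or one of the user's groups.
    Exclude targets and per-method parameters (such as the office-phone
    option for Voice calls) are omitted for brevity."""
    if not method_config.get("enabled", False):
        return False
    includes = method_config.get("include", [])
    return "all_users" in includes or any(g in includes for g in user_groups)
```

For example, a pilot rollout would list only the pilot group in the include target, so members of other groups would evaluate to not-enabled for that method.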
Only the [converged registration experience](concept-registration-mfa-sspr-combi ## Legacy MFA and SSPR policies -Two other policies, located in **multifactor authentication** settings and **Password reset** settings, provide a legacy way to manage some authentication methods for all users in the tenant. You can't control who uses an enabled authentication method, or how the method can be used. A [Global Administrator](../roles/permissions-reference.md#global-administrator) is needed to manage these policies. +Two other policies, located in **Multifactor authentication** settings and **Password reset** settings, provide a legacy way to manage some authentication methods for all users in the tenant. You can't control who uses an enabled authentication method, or how the method can be used. A [Global Administrator](../roles/permissions-reference.md#global-administrator) is needed to manage these policies. >[!Important] >In March 2023, we announced the deprecation of managing authentication methods in the legacy multifactor authentication and self-service password reset (SSPR) policies. Beginning September 30, 2024, authentication methods can't be managed in these legacy MFA and SSPR policies. We recommend customers use the manual migration control to migrate to the Authentication methods policy by the deprecation date. -To manage the legacy MFA policy, click **Security** > **multifactor authentication** > **Additional cloud-based multifactor authentication settings**. +To manage the legacy MFA policy, select **Security** > **Multifactor authentication** > **Additional cloud-based multifactor authentication settings**. :::image type="content" border="true" source="./media/concept-authentication-methods-manage/service-settings.png" alt-text="Screenshot of MFA service settings."::: |
active-directory | Concept Authentication Operator Assistance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-operator-assistance.md | For example, let's say a customer in the U.S. has an office phone number 425-555-1234 If the setting is **Off**, the system will automatically dial extensions as part of the phone number. Your admin can still specify individual users who should be enabled for operator assistance by prefixing the extension with '@'. For example, 425-555-1234x@5678 would indicate that operator assistance should be used, even though the setting is **Off**. -To check the status of this feature in your own tenant, sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Authentication Policy Administrator](../roles/permissions-reference.md#authentication-policy-administrator), then click **Protection** > **multifactor authentication** > **Phone call settings**. Check **Operator required to transfer extensions** to see if the setting is **On** or **Off**. +To check the status of this feature in your own tenant, sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Authentication Policy Administrator](../roles/permissions-reference.md#authentication-policy-administrator), then click **Protection** > **Multifactor authentication** > **Phone call settings**. Check **Operator required to transfer extensions** to see if the setting is **On** or **Off**. ![Screenshot of operator assistance settings](./media/concept-authentication-operator-assistance/settings.png) |
active-directory | Concept Authentication Strengths | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-strengths.md | There are two policies that determine which authentication methods can be used t :::image type="content" border="true" source="./media/concept-authentication-strengths/authentication-methods-policy.png" alt-text="Screenshot of Authentication methods policy."::: -- **Security** > **Multifactor Authentication** > **Additional cloud-based multifactor authentication settings** is a legacy way to control multifactor authentication methods for all of the users in the tenant. +- **Security** > **Multifactor authentication** > **Additional cloud-based multifactor authentication settings** is a legacy way to control multifactor authentication methods for all of the users in the tenant. :::image type="content" border="true" source="./media/concept-authentication-strengths/service-settings.png" alt-text="Screenshot of MFA service settings."::: |
active-directory | Concept Certificate Based Authentication Certificateuserids | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-certificateuserids.md | Content-Type: application/json For the configuration, you can use the [Microsoft Graph PowerShell SDK](/powershell/microsoftgraph/installation): -1. Start Windows PowerShell with administrator privileges. +1. Start PowerShell with administrator privileges. 1. Install and Import the Microsoft Graph PowerShell SDK ```powershell |
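The certificateUserIds attribute configured above accepts values in a small set of documented `X509:` mapping formats. As an illustrative sketch only — the helper function, its name, and the affinity labels are assumptions for this example, not any Microsoft API — a format check might look like:

```python
# Hypothetical helper (not part of any Microsoft SDK or Graph API):
# classifies a certificateUserIds value by its documented mapping prefix.
SUPPORTED_PREFIXES = {
    "X509:<PN>": "low-affinity",           # PrincipalName
    "X509:<RFC822>": "low-affinity",       # RFC822Name
    "X509:<SKI>": "high-affinity",         # Subject Key Identifier
    "X509:<SHA1-PUKEY>": "high-affinity",  # SHA1 public key
}

def classify_certificate_user_id(value: str) -> str:
    """Return the binding affinity for a certificateUserIds value, or
    raise ValueError when the prefix isn't one of the documented ones."""
    for prefix, affinity in SUPPORTED_PREFIXES.items():
        if value.startswith(prefix):
            return affinity
    raise ValueError(f"Unsupported certificateUserIds format: {value!r}")

print(classify_certificate_user_id("X509:<SKI>123456789abcdef"))   # high-affinity
print(classify_certificate_user_id("X509:<PN>bob@woodgrove.com"))  # low-affinity
```

Validating the prefix before writing the attribute helps catch malformed values early, since a value that no binding rule can parse never matches a certificate.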
active-directory | Concept Certificate Based Authentication Technical Deep Dive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-technical-deep-dive.md | Now we'll walk through each step: 1. Microsoft Entra ID checks whether CBA is enabled for the tenant. If CBA is enabled, the user sees a link to **Use a certificate or smartcard** on the password page. If the user doesn't see the sign-in link, make sure CBA is enabled on the tenant. For more information, see [How do I enable Microsoft Entra CBA?](./certificate-based-authentication-faq.yml#how-can-an-administrator-enable-microsoft-entra-cba-). >[!NOTE]- > If CBA is enabled on the tenant, all users will see the link to **Use a certificate or smart card** on the password page. However, only the users in scope for CBA will be able to authenticate successfully against an application that uses Microsoft Entra ID as their Identity provider (IdP). + > If CBA is enabled on the tenant, all users see the link to **Use a certificate or smart card** on the password page. However, only the users in scope for CBA can authenticate successfully against an application that uses Microsoft Entra ID as their Identity provider (IdP). :::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/sign-in-cert.png" alt-text="Screenshot of the Use a certificate or smart card."::: Now we'll walk through each step: :::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/entry.png" alt-text="Screenshot of the entry for X.509 certificate."::: -1. Microsoft Entra ID will request a client certificate, the user picks the client certificate, and clicks **Ok**. +1. Microsoft Entra ID requests a client certificate, the user picks the client certificate, and clicks **Ok**. 
>[!NOTE] >Trusted CA hints are not supported, so the list of certificates can't be further scoped. We're looking into adding this functionality in the future. Now we'll walk through each step: :::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/cert-picker.png" alt-text="Screenshot of the certificate picker." lightbox="./media/concept-certificate-based-authentication-technical-deep-dive/cert-picker.png"::: 1. Microsoft Entra ID verifies the certificate revocation list to make sure the certificate isn't revoked and is valid. Microsoft Entra ID identifies the user by using the [username binding configured](how-to-certificate-based-authentication.md#step-4-configure-username-binding-policy) on the tenant to map the certificate field value to the user attribute value.-1. If a unique user is found with a Conditional Access policy that requires multifactor authentication, and the [certificate authentication binding rule](how-to-certificate-based-authentication.md#step-3-configure-authentication-binding-policy) satisfies MFA, then Microsoft Entra ID signs the user in immediately. If MFA is required but the certificate satisfies only a single factor, either passwordless sign-in or FIDO2 will be offered as a second factor if they are already registered. +1. If a unique user is found with a Conditional Access policy that requires multifactor authentication, and the [certificate authentication binding rule](how-to-certificate-based-authentication.md#step-3-configure-authentication-binding-policy) satisfies MFA, then Microsoft Entra ID signs the user in immediately. If MFA is required but the certificate satisfies only a single factor, either passwordless sign-in or FIDO2 is offered as a second factor if already registered. 1. Microsoft Entra ID completes the sign-in process by sending a primary refresh token back to indicate successful sign-in. 1. 
If the user sign-in is successful, the user can access the application. ## Certificate-based authentication is MFA capable -Microsoft Entra CBA is an MFA (multifactor authentication) capable method, that is Microsoft Entra CBA can be either Single (SF) or multifactor (MF) depending on the tenant configuration. Enabling CBA for a user indicates the user is potentially capable of MFA. This means a user may need additional configuration to get MFA and proof up to register other authentication methods when the user is in scope for CBA. +Microsoft Entra CBA is a multifactor authentication (MFA)-capable method. Microsoft Entra CBA can be either single-factor (SF) or multifactor (MF) depending on the tenant configuration. Enabling CBA makes a user potentially capable of completing MFA. A user may need more configuration to complete MFA, and proof up to register other authentication methods when the user is in scope for CBA. -If CBA enabled user only has a Single Factor (SF) certificate and need MFA - 1. Use Password + SF certificate. - 1. Issue Temporary Access Pass (TAP) - 1. Admin adds Phone Number to user account and allows Voice/text message method for user. +If the CBA-enabled user only has a Single Factor (SF) certificate and needs to complete MFA: + 1. Use a password and SF certificate. + 1. Issue a Temporary Access Pass. + 1. Authentication Policy Administrator adds a phone number and allows voice/text message authentication for the user account. -If CBA enabled user has not yet been issued a certificate and need MFA - 1. Issue Temporary Access Pass (TAP) - 1. Admin adds Phone Number to user account and allows Voice/text message method for user. +If the CBA-enabled user hasn't yet been issued a certificate and needs to complete MFA: + 1. Issue a Temporary Access Pass. + 1. Authentication Policy Administrator adds a phone number and allows voice/text message authentication for the user account. 
-If CBA enabled user cannot use MF cert (such as on mobile device without smart card support) and need MFA - 1. Issue Temporary Access Pass (TAP) - 1. User Register another MFA method (when user can use MF cert) - 1. Use Password + MF cert (when user can use MF cert) - 1. Admin adds Phone Number to user account and allows Voice/text message method for user +If the CBA-enabled user can't use an MF cert, such as on a mobile device without smart card support, and needs to complete MFA: + 1. Issue a Temporary Access Pass. + 1. User needs to register another MFA method (when user can use MF cert). + 1. Use password and MF cert (when user can use MF cert). + 1. Authentication Policy Administrator adds a phone number and allows voice/text message authentication for the user account. ## MFA with Single-factor certificate-based authentication If CBA enabled user cannot use MF cert (such as on mobile device without smart c Microsoft Entra CBA can be used as a second factor to meet MFA requirements with single-factor certificates. Some of the supported combinations are -1. CBA (first factor) + passwordless phone sign-in (PSI as second factor) -1. CBA (first factor) + FIDO2 security keys (second factor) -1. Password (first factor) + CBA (second factor) +1. CBA (first factor) and passwordless phone sign-in (PSI as second factor) +1. CBA (first factor) and FIDO2 security keys (second factor) +1. Password (first factor) and CBA (second factor) Users need to have another way to get MFA and register passwordless sign-in or FIDO2 before signing in with Microsoft Entra CBA. >[!IMPORTANT]->A user will be considered MFA capable when a user is in scope for Certificate-based authentication auth method. This means user will not be able to use proof up as part of their authentication to registerd other available methods. Make sure users who do not have a valid certificate are not part of CBA auth method scope. 
More info on [Microsoft Entra multifactor authentication](../authentication/concept-mfa-howitworks.md) +>A user is considered MFA capable when they are included in the CBA method settings. This means the user can't use proof up as part of their authentication to register other available methods. Make sure users without a valid certificate aren't included in the CBA method settings. For more information about how authentication works, see [Microsoft Entra multifactor authentication](../authentication/concept-mfa-howitworks.md). **Steps to set up passwordless phone sign-in (PSI) with CBA** For passwordless sign-in to work, users should disable legacy notification throu 1. Follow the steps at [Enable passwordless phone sign-in authentication](../authentication/howto-authentication-passwordless-phone.md#enable-passwordless-phone-sign-in-authentication-methods) >[!IMPORTANT]- >In the above configuration under step 4, please choose **Passwordless** option. Change the mode for each groups added for PSI for **Authentication mode**, choose **Passwordless** for passwordless sign-in to work with CBA. If the admin configures "Any", CBA + PSI will not work. + >In step 4 of the above configuration, choose the **Passwordless** option. For each group added for PSI, set **Authentication mode** to **Passwordless** for passwordless sign-in to work with CBA. If the admin configures "Any", CBA and PSI don't work. -1. Select **Protection** > **multifactor authentication** > **Additional cloud-based multifactor authentication settings**. +1. Select **Protection** > **Multifactor authentication** > **Additional cloud-based multifactor authentication settings**. 
:::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/configure.png" alt-text="Screenshot of how to configure multifactor authentication settings."::: Let's look at an example of a user who has single factor certificates and has co :::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/cert-picker.png" alt-text="Screenshot of how to select a certificate."::: -1. Because the certificate is configured to be single-factor authentication strength, the user needs a second factor to meet MFA requirements. The user will see available second factors, which in this case is passwordless sign-in. Select **Approve a request on my Microsoft Authenticator app**. +1. Because the certificate is configured to be single-factor authentication strength, the user needs a second factor to meet MFA requirements. The user sees available second factors, which in this case is passwordless sign-in. Select **Approve a request on my Microsoft Authenticator app**. :::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/second-factor-request.png" alt-text="Screenshot of second factor request."::: 1. You'll get a notification on your phone. Select **Approve Sign-in?**. Let's look at an example of a user who has single factor certificates and has co :::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/number.png" alt-text="Screenshot of number match."::: -1. Select **Yes** and user will be authenticated and signed in. +1. Select **Yes**, and the user is authenticated and signed in. 
An administrator can change the default value from single factor to multifactor, or set up custom policy configurations either by using issuer subject or policy OID fields in the certificate. +The authentication binding policy helps determine the strength of authentication as either single-factor or multifactor. An administrator can change the default value from single-factor to multifactor, or set up custom policy configurations either by using issuer subject or policy OID fields in the certificate. ### Certificate strengths When a user has a multifactor certificate, they can perform multifactor authenti Because multiple authentication binding policy rules can be created with different certificate fields, there are some rules that determine the authentication protection level. They are as follows: -1. Exact match is used for strong authentication by using policy OID. If you have a certificate A with policy OID **1.2.3.4.5** and a derived credential B based on that certificate has a policy OID **1.2.3.4.5.6**, and the custom rule is defined as **Policy OID** with value **1.2.3.4.5** with MFA, only certificate A will satisfy MFA, and credential B will satisfy only single-factor authentication. If the user used derived credential during sign-in and was configured to have MFA, the user will be asked for a second factor for successful authentication. -1. Policy OID rules will take precedence over certificate issuer rules. If a certificate has both policy OID and Issuer, the policy OID is always checked first, and if no policy rule is found then the issuer subject bindings are checked. Policy OID has a higher strong authentication binding priority than the issuer. +1. Exact match is used for strong authentication by using policy OID. 
If you have a certificate A with policy OID **1.2.3.4.5** and a derived credential B based on that certificate has a policy OID **1.2.3.4.5.6**, and the custom rule is defined as **Policy OID** with value **1.2.3.4.5** with MFA, only certificate A satisfies MFA, and credential B satisfies only single-factor authentication. If the user used the derived credential during sign-in and was configured to have MFA, the user is asked for a second factor for successful authentication. +1. Policy OID rules take precedence over certificate issuer rules. If a certificate has both policy OID and Issuer, the policy OID is always checked first, and if no policy rule is found then the issuer subject bindings are checked. Policy OID has a higher strong authentication binding priority than the issuer. 1. If one CA binds to MFA, all user certificates that the CA issues qualify as MFA. The same logic applies for single-factor authentication. 1. If one policy OID binds to MFA, all user certificates that include this policy OID as one of the OIDs (a user certificate could have multiple policy OIDs) qualify as MFA. 1. If there's a conflict between multiple policy OIDs (such as when a certificate has two policy OIDs, where one binds to single-factor authentication and the other binds to MFA) then treat the certificate as single-factor authentication. The username binding policy helps validate the certificate of the user. By defau ### Achieve higher security with certificate bindings -There are four supported methods. In general, mapping types are considered high-affinity if they're based on identifiers that you can't reuse (Such as Subject Key Identifiers or SHA1 Public Key). These identifiers convey a higher assurance that only a single certificate can be used to authenticate the respective user. Therefore, all mapping types based on usernames and email addresses are considered low-affinity. 
Therefore, Microsoft Entra ID implements two mappings considered low-affinity (based on reusable identifiers), and the other two are considered high-affinity bindings. For more information, see [certificateUserIds](concept-certificate-based-authentication-certificateuserids.md). +There are four supported methods for certificate bindings. In general, mapping types are considered high-affinity if they're based on identifiers that you can't reuse, such as Subject Key Identifiers or SHA1 Public Key. These identifiers convey a higher assurance that only a single certificate can be used to authenticate the respective user. All mapping types based on usernames and email addresses are considered low-affinity. Microsoft Entra ID implements two mappings considered low-affinity based on reusable identifiers. The other two are considered high-affinity bindings. For more information, see [certificateUserIds](concept-certificate-based-authentication-certificateuserids.md). -|Certificate mapping Field | Examples of values in certificateUserIds | User object attributes | Type | +|Certificate mapping field | Examples of values in certificateUserIds | User object attributes | Type | |--|--||-|-|PrincipalName | ΓÇ£X509:\<PN>bob@woodgrove.comΓÇ¥ | userPrincipalName <br> onPremisesUserPrincipalName <br> certificateUserIds | low-affinity | -|RFC822Name | ΓÇ£X509:\<RFC822>user@woodgrove.comΓÇ¥ | userPrincipalName <br> onPremisesUserPrincipalName <br> certificateUserIds | low-affinity | -|X509SKI | ΓÇ£X509:\<SKI>123456789abcdefΓÇ¥| certificateUserIds | high-affinity | -|X509SHA1PublicKey |ΓÇ£X509:\<SHA1-PUKEY>123456789abcdefΓÇ¥ | certificateUserIds | high-affinity | +|PrincipalName | X509:\<PN>bob@woodgrove.com | userPrincipalName <br> onPremisesUserPrincipalName <br> certificateUserIds | low-affinity | +|RFC822Name | X509:\<RFC822>user@woodgrove.com | userPrincipalName <br> onPremisesUserPrincipalName <br> certificateUserIds | low-affinity | +|X509SKI | X509:\<SKI>123456789abcdef| 
certificateUserIds | high-affinity | +|X509SHA1PublicKey |X509:\<SHA1-PUKEY>123456789abcdef | certificateUserIds | high-affinity | <a name='how-azure-ad-resolves-multiple-username-policy-binding-rules'></a> There are four supported methods. In general, mapping types are considered high- Use the highest priority (lowest number) binding. 1. Look up the user object by using the username or User Principal Name.-1. If the X.509 certificate field is on the presented certificate, Microsoft Entra ID will match the value in the certificate field to the user object attribute value. +1. If the X.509 certificate field is on the presented certificate, Microsoft Entra ID matches the value in the certificate field to the user object attribute value. 1. If a match is found, user authentication is successful. 1. If a match isn't found, move to the next priority binding. 1. If the X.509 certificate field isn't on the presented certificate, move to the next priority binding. Use the highest priority (lowest number) binding. Each of the Microsoft Entra attributes (userPrincipalName, onPremiseUserPrincipalName, certificateUserIds) available to bind certificates to Microsoft Entra user accounts has unique constraint to ensure a certificate only matches a single Microsoft Entra user account. However, Microsoft Entra CBA does support configuring multiple binding methods in the username binding policy. This allows an administrator to accommodate multiple certificate configurations. However the combination of some methods can also potentially permit one certificate to match to multiple Microsoft Entra user accounts. >[!IMPORTANT]->When using multiple bindings, Microsoft Entra CBA authentication is only as secure as your low-affinity binding as Microsoft Entra CBA will validate each of the bindings to authenticate the user. 
In order to eliminate a scenario where a single certificate matching multiple Microsoft Entra accounts, the tenant administrator should: +>When using multiple bindings, Microsoft Entra CBA authentication is only as secure as your low-affinity binding as Microsoft Entra CBA validates each of the bindings to authenticate the user. In order to eliminate a scenario where a single certificate matches multiple Microsoft Entra accounts, the tenant administrator should: >- Configure a single binding method in the username binding policy. >- If a tenant has multiple binding methods configured and doesn't want to allow one certificate to multiple accounts, the tenant admin must ensure all allowable methods configured in the policy map to the same Microsoft Entra account, i.e., all user accounts should have values matching all the bindings. >- If a tenant has multiple binding methods configured, the admin should make sure that they do not have more than one low-affinity binding Microsoft Entra ID downloads and caches the customers certificate revocation lis An admin can configure the CRL distribution point during the setup process of the trusted issuers in the Microsoft Entra tenant. Each trusted issuer should have a CRL that can be referenced by using an internet-facing URL. >[!IMPORTANT]->The maximum size of a CRL for Microsoft Entra ID to successfully download on an interactive sign-in and cache is 20 MB in Azure Global and 45 MB in Azure US Government clouds, and the time required to download the CRL must not exceed 10 seconds. If Microsoft Entra ID can't download a CRL, certificate-based authentications using certificates issued by the corresponding CA will fail. As a best practice to keep CRL files within size limits, keep certificate lifetimes within reasonable limits and to clean up expired certificates. For more information, see [Is there a limit for CRL size?](certificate-based-authentication-faq.yml#is-there-a-limit-for-crl-size-). 
+>The maximum size of a CRL for Microsoft Entra ID to successfully download on an interactive sign-in and cache is 20 MB in Azure Global and 45 MB in Azure US Government clouds, and the time required to download the CRL must not exceed 10 seconds. If Microsoft Entra ID can't download a CRL, certificate-based authentications using certificates issued by the corresponding CA fail. As a best practice to keep CRL files within size limits, keep certificate lifetimes within reasonable limits and clean up expired certificates. For more information, see [Is there a limit for CRL size?](certificate-based-authentication-faq.yml#is-there-a-limit-for-crl-size-). -When a user performs an interactive sign-in with a certificate, and the CRL exceeds the interactive limit for a cloud, their initial sign-in will fail with the following error: +When a user performs an interactive sign-in with a certificate, and the CRL exceeds the interactive limit for a cloud, their initial sign-in fails with the following error: "The Certificate Revocation List (CRL) downloaded from {uri} has exceeded the maximum allowed size ({size} bytes) for CRLs in Microsoft Entra ID. Try again in few minutes. If the issue persists, contact your tenant administrators." -After the error, Microsoft Entra ID will attempt to download the CRL subject to the service-side limits (45 MB in Azure Global and 150 MB in Azure US Government clouds). +After the error, Microsoft Entra ID attempts to download the CRL subject to the service-side limits (45 MB in Azure Global and 150 MB in Azure US Government clouds). >[!IMPORTANT]->If the admin skips the configuration of the CRL, Microsoft Entra ID will not perform any CRL checks during the certificate-based authentication of the user. This can be helpful for initial troubleshooting, but shouldn't be considered for production use. 
+>If the admin skips the configuration of the CRL, Microsoft Entra ID doesn't perform any CRL checks during the certificate-based authentication of the user. This can be helpful for initial troubleshooting, but shouldn't be considered for production use. As of now, we don't support Online Certificate Status Protocol (OCSP) because of performance and reliability reasons. Instead of downloading the CRL at every connection by the client browser for OCSP, Microsoft Entra ID downloads once at the first sign-in and caches it, thereby improving the performance and reliability of CRL verification. We also index the cache so the search is much faster every time. Customers must publish CRLs for certificate revocation. The following steps are a typical flow of the CRL check: -1. Microsoft Entra ID will attempt to download the CRL at the first sign-in event of any user with a certificate of the corresponding trusted issuer or certificate authority. -1. Microsoft Entra ID will cache and re-use the CRL for any subsequent usage. It will honor the **Next update date** and, if available, **Next CRL Publish date** (used by Windows Server CAs) in the CRL document. -1. The user certificate-based authentication will fail if: +1. Microsoft Entra ID attempts to download the CRL at the first sign-in event of any user with a certificate of the corresponding trusted issuer or certificate authority. +1. Microsoft Entra ID caches and re-uses the CRL for any subsequent usage. It honors the **Next update date** and, if available, **Next CRL Publish date** (used by Windows Server CAs) in the CRL document. +1. The user certificate-based authentication fails if: - A CRL has been configured for the trusted issuer and Microsoft Entra ID can't download the CRL, due to availability, size, or latency constraints. - The user's certificate is listed as revoked on the CRL. 
:::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/user-cert.png" alt-text="Screenshot of the revoked user certificate in the CRL." ::: - - Microsoft Entra ID will attempt to download a new CRL from the distribution point if the cached CRL document is expired. + - Microsoft Entra ID attempts to download a new CRL from the distribution point if the cached CRL document is expired. >[!NOTE]->Microsoft Entra ID will check the CRL of the issuing CA and other CAs in the PKI trust chain up to the root CA. We have a limit of up to 10 CAs from the leaf client certificate for CRL validation in the PKI chain. The limitation is to make sure a bad actor will not bring down the service by uploading a PKI chain with a huge number of CAs with a bigger CRL size. -If the tenantΓÇÖs PKI chain has more than 5 CAs and in case of a CA compromise, the administrator should remove the compromised trusted issuer from the Microsoft Entra tenant configuration. +>Microsoft Entra ID checks the CRL of the issuing CA and other CAs in the PKI trust chain up to the root CA. We have a limit of up to 10 CAs from the leaf client certificate for CRL validation in the PKI chain. The limitation is to make sure a bad actor doesn't bring down the service by uploading a PKI chain with a huge number of CAs with a bigger CRL size. +If the tenant's PKI chain has more than 5 CAs and a CA is compromised, the administrator should remove the compromised trusted issuer from the Microsoft Entra tenant configuration. >[!IMPORTANT]
::: -If CBA fails on a browser, even if the failure is because you cancel the certificate picker, you need to close the browser session and open a new session to try CBA again. A new session is required because browsers cache the certificate. When CBA is re-tried, the browser will send the cached certificate during the TLS challenge, which causes sign-in failure and the validation error. +If CBA fails on a browser, even if the failure is because you cancel the certificate picker, you need to close the browser session and open a new session to try CBA again. A new session is required because browsers cache the certificate. When CBA is re-tried, the browser sends the cached certificate during the TLS challenge, which causes sign-in failure and the validation error. Click **More details** to get logging information that can be sent to an administrator, who in turn can get more information from the Sign-in logs. Click **Other ways to sign in** to try other methods available to the user to si ## Certificate-based authentication in MostRecentlyUsed (MRU) methods -Once a user authenticates successfully using CBA, the user's MostRecentlyUsed (MRU) authentication method will be set to CBA. Next time, when the user enters their UPN and clicks **Next**, the user will be taken to the CBA method directly, and need not select **Use the certificate or smart card**. +Once a user authenticates successfully using CBA, the user's MostRecentlyUsed (MRU) authentication method is set to CBA. Next time, when the user enters their UPN and clicks **Next**, the user is taken to the CBA method directly, and need not select **Use the certificate or smart card**. To reset the MRU method, the user needs to cancel the certificate picker, click **Other ways to sign in**, and select another method available to the user and authenticate successfully. |
active-directory | Concept Mfa Authprovider | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-mfa-authprovider.md | Title: Microsoft Entra multifactor authenticationentication Providers -description: When should you use an Auth Provider with Azure MFA? + Title: Microsoft Entra multifactor authentication providers +description: When should you use an authentication provider with Microsoft Entra multifactor authentication (MFA)? -A Microsoft Entra multifactor authenticationentication Provider is used to take advantage of features provided by Microsoft Entra multifactor authentication for users who **do not have licenses**. +A Microsoft Entra multifactor authentication provider is used to take advantage of features provided by Microsoft Entra multifactor authentication for users who **do not have licenses**. ## Caveats related to the Azure MFA SDK |
active-directory | Concept Password Ban Bad Combined Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-password-ban-bad-combined-policy.md | The following Microsoft Entra password policy requirements apply for all passwor ## Password expiration policies -Password expiration policies are unchanged but they're included in this topic for completeness. A *Global Administrator* or *User Administrator* can use the [Microsoft Entra Module for Windows PowerShell](/powershell/module/Azuread/) to set user passwords not to expire. +Password expiration policies are unchanged but they're included in this topic for completeness. A *Global Administrator* or *User Administrator* can use the [Azure AD Module for PowerShell](/powershell/module/Azuread/) to set user passwords not to expire. > [!NOTE] > By default, only passwords for user accounts that aren't synchronized through Microsoft Entra Connect can be configured to not expire. For more information about directory synchronization, see [Connect AD with Microsoft Entra ID](../hybrid/connect/how-to-connect-password-hash-synchronization.md#password-expiration-policy). The following expiration requirements apply to other providers that use Microsof | Property | Requirements | | | |-| Password expiry duration (Maximum password age) |Default value: **90** days.<br>The value is configurable by using the `Set-MsolPasswordPolicy` cmdlet from the Microsoft Entra Module for Windows PowerShell. | +| Password expiry duration (Maximum password age) |Default value: **90** days.<br>The value is configurable by using the `Set-MsolPasswordPolicy` cmdlet from the Azure AD PowerShell module. | | Password expiry (Let passwords never expire) |Default value: **false** (indicates that passwords have an expiration date).<br>The value can be configured for individual user accounts by using the `Set-MsolUser` cmdlet.| ## Next steps |
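The two cmdlets named in the table above can be sketched as follows (a minimal example only, assuming the deprecated MSOnline module is installed and an authenticated session; `contoso.com` and `user@contoso.com` are placeholder values):

```powershell
# Connect to the tenant (prompts for credentials).
Connect-MsolService

# Set the maximum password age for a domain to 90 days,
# notifying users 14 days before expiry.
Set-MsolPasswordPolicy -DomainName "contoso.com" -ValidityPeriod 90 -NotificationDays 14

# Configure an individual account so its password never expires.
Set-MsolUser -UserPrincipalName "user@contoso.com" -PasswordNeverExpires $true
```

Both cmdlets require appropriate administrator rights in the tenant; the parameter values shown are illustrative defaults, not recommendations.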
active-directory | Concept Sspr Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-sspr-policy.md | The following Microsoft Entra password policy options are defined. Unless noted, | Characters allowed |A – Z<br>a - z<br>0 – 9<br>@ # $ % ^ & * - _ ! + = [ ] { } | \ : ' , . ? / \` ~ " ( ) ; < ><br>Blank space | | Characters not allowed | Unicode characters | | Password restrictions |A minimum of 8 characters and a maximum of 256 characters.<br>Requires three out of four of the following types of characters:<br>- Lowercase characters<br>- Uppercase characters<br>- Numbers (0-9)<br>- Symbols (see the previous password restrictions) |-| Password expiry duration (Maximum password age) |Default value: **90** days. If the tenant was created after 2021, it has no default expiration value. You can check current policy with [Get-MsolPasswordPolicy](/powershell/module/msonline/get-msolpasswordpolicy).<br>The value is configurable by using the `Set-MsolPasswordPolicy` cmdlet from the Microsoft Entra Module for Windows PowerShell.| +| Password expiry duration (Maximum password age) |Default value: **90** days. If the tenant was created after 2021, it has no default expiration value. You can check current policy with [Get-MsolPasswordPolicy](/powershell/module/msonline/get-msolpasswordpolicy).<br>The value is configurable by using the `Set-MsolPasswordPolicy` cmdlet from the Azure AD module for PowerShell.| | Password expiry (Let passwords never expire) |Default value: **false** (indicates that passwords have an expiration date).<br>The value can be configured for individual user accounts by using the `Set-MsolUser` cmdlet. | | Password change history | The last password *can't* be used again when the user changes a password. | | Password reset history | The last password *can* be used again when the user resets a forgotten password. 
| A one-gate policy requires one piece of authentication data, such as an email ad ## Password expiration policies -A *Global Administrator* or *User Administrator* can use the [Microsoft Entra Module for Windows PowerShell](/powershell/module/Azuread/) to set user passwords not to expire. +A *Global Administrator* or *User Administrator* can use the [Azure Active Directory module for PowerShell](/powershell/module/Azuread/) to set user passwords not to expire. You can also use PowerShell cmdlets to remove the never-expires configuration or to see which user passwords are set to never expire. |
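The checks described above can be sketched with the MSOnline cmdlets (a sketch only, assuming the deprecated MSOnline module and an authenticated `Connect-MsolService` session; `user@contoso.com` is a placeholder):

```powershell
# List users whose passwords are set to never expire.
Get-MsolUser -All |
    Where-Object { $_.PasswordNeverExpires -eq $true } |
    Select-Object UserPrincipalName

# Remove the never-expires configuration for a single user.
Set-MsolUser -UserPrincipalName "user@contoso.com" -PasswordNeverExpires $false
```

Running `Get-MsolUser -All` can be slow in large tenants; scoping the query to a domain or set of users is usually preferable.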
active-directory | Concepts Azure Multi Factor Authentication Prompts Session Lifetime | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concepts-azure-multi-factor-authentication-prompts-session-lifetime.md | To configure or review the *Remain signed-in* option, complete the following ste To remember multifactor authentication settings on trusted devices, complete the following steps: 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Authentication Policy Administrator](../roles/permissions-reference.md#authentication-policy-administrator).-1. Browse to **Protection** > then **multifactor authentication**. +1. Browse to **Protection** > **Multifactor authentication**. 1. Under **Configure**, select **Additional cloud-based MFA settings**. 1. In the *multifactor authentication service settings* page, scroll to **remember multifactor authentication settings**. Disable the setting by unchecking the checkbox. |
active-directory | Feature Availability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/feature-availability.md | The following tables list Microsoft Entra feature availability in Azure Governm | HR-provisioning app | Availability | |-|:--:|-|Workday to Microsoft Entra User Provisioning | ✅ | +|Workday to Microsoft Entra user provisioning | ✅ | |Workday Writeback | ✅ |-|SuccessFactors to Microsoft Entra User Provisioning | ✅ | +|SuccessFactors to Microsoft Entra user provisioning | ✅ | |SuccessFactors to Writeback | ✅ |-|Provisioning agent configuration and registration with Gov cloud tenant| Works with special undocumented command-line invocation:<br> AADConnectProvisioningAgent.Installer.exe ENVIRONMENTNAME=AzureUSGovernment | +|Provisioning agent configuration and registration with Gov cloud tenant| Works with special undocumented command-line invocation:<br> `AADConnectProvisioningAgent.Installer.exe ENVIRONMENTNAME=AzureUSGovernment` | |
active-directory | How To Certificate Based Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-certificate-based-authentication.md | To enable Microsoft Entra CBA and configure user bindings in the Microsoft Entra :::image type="content" border="true" source="./media/how-to-certificate-based-authentication/policy.png" alt-text="Screenshot of Authentication policy."::: 1. Click **Configure** to set up authentication binding and username binding.-1. The protection level attribute has a default value of **Single-factor authentication**. Select **multifactor authentication** to change the default value to MFA. +1. The protection level attribute has a default value of **Single-factor authentication**. Select **Multifactor authentication** to change the default value to MFA. >[!NOTE] >The default protection level value will be in effect if no custom rules are added. If custom rules are added, the protection level defined at the rule level will be honored instead. To enable Microsoft Entra CBA and configure user bindings in the Microsoft Entra To create a rule by certificate issuer, click **Certificate issuer**. 1. Select a **Certificate issuer identifier** from the list box.- 1. Click **multifactor authentication**. + 1. Click **Multifactor authentication**. :::image type="content" border="true" source="./media/how-to-certificate-based-authentication/multifactor-issuer.png" alt-text="Screenshot of multifactor authentication policy."::: To create a rule by Policy OID, click **Policy OID**. 1. Enter a value for **Policy OID**.- 1. Click **multifactor authentication**. + 1. Click **Multifactor authentication**. :::image type="content" border="true" source="./media/how-to-certificate-based-authentication/multifactor-policy-oid.png" alt-text="Screenshot of mapping to Policy OID."::: |
active-directory | How To Mfa Registration Campaign | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-registration-campaign.md | To enable a registration campaign in the Microsoft Entra admin center, complete 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as [Authentication Policy Administrator](../roles/permissions-reference.md#authentication-policy-administrator) or [Global Administrator](../roles/permissions-reference.md#global-administrator). 1. Browse to **Protection** > **Authentication methods** > **Registration campaign** and click **Edit**.-1. For **State**, click **Microsoft managed** or **Enabled**. In the following screenshot, the registration campaign is **Microsoft managed**. That setting allows Microsoft to set the default value to be either Enabled or Disabled. For the registration campaign, the Microsoft managed value is Enabled for voice call and text message users with free and trial subscriptions. For more information, see [Protecting authentication methods in Microsoft Entra ID](concept-authentication-default-enablement.md). +1. For **State**, click **Microsoft managed** or **Enabled**. In the following screenshot, the registration campaign is **Microsoft managed**. That setting allows Microsoft to set the default value to be either Enabled or Disabled. From Sept. 25 to Oct. 20, 2023, the Microsoft managed value for the registration campaign will change to **Enabled** for voice call and text message users across all tenants. For more information, see [Protecting authentication methods in Azure Active Directory](concept-authentication-default-enablement.md). 
:::image type="content" border="true" source="media/how-to-mfa-registration-campaign/admin-experience.png" alt-text="Screenshot of enabling a registration campaign."::: The following table lists **authenticationMethodsRegistrationCampaign** properti ||--|-| |snoozeDurationInDays|Range: 0 - 14|Defines the number of days before the user is nudged again.<br>If the value is 0, the user is nudged during every MFA attempt.<br>Default: 1 day| |enforceRegistrationAfterAllowedSnoozes|"true"<br>"false"|Dictates whether a user is required to perform setup after 3 snoozes.<br>If true, user is required to register.<br>If false, user can snooze indefinitely.<br>Default: true<br>Please note this property only comes into effect once the Microsoft managed value for the registration campaign changes to Enabled for text message and voice call for your organization.|-|state|"enabled"<br>"disabled"<br>"default"|Allows you to enable or disable the feature.<br>Default value is used when the configuration hasn't been explicitly set and will use Microsoft Entra ID default value for this setting. Currently maps to disabled.<br>Change states to either enabled or disabled as needed.| +|state|"enabled"<br>"disabled"<br>"default"|Allows you to enable or disable the feature.<br>Default value is used when the configuration hasn't been explicitly set and will use Microsoft Entra ID default value for this setting. From Sept. 25 to Oct. 20, 2023, the default state will change to enabled for voice call and text message users across all tenants.<br>Change state to enabled (for all users) or disabled as needed.| |excludeTargets|N/A|Allows you to exclude different users and groups that you want omitted from the feature. If a user is in a group that is excluded and a group that is included, the user will be excluded from the feature.| |includeTargets|N/A|Allows you to include different users and groups that you want the feature to target.| |
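Pulling the properties in the table together, a registration-campaign configuration in the Microsoft Graph authentication methods policy might look like the following fragment (a sketch only; the `includeTargets` entry with `id` of `all_users` and a `targetedAuthenticationMethod` of `microsoftAuthenticator` is illustrative):

```json
"registrationEnforcement": {
    "authenticationMethodsRegistrationCampaign": {
        "snoozeDurationInDays": 1,
        "enforceRegistrationAfterAllowedSnoozes": true,
        "state": "enabled",
        "excludeTargets": [],
        "includeTargets": [
            {
                "id": "all_users",
                "targetType": "group",
                "targetedAuthenticationMethod": "microsoftAuthenticator"
            }
        ]
    }
}
```

This fragment would sit inside the tenant's **authenticationMethodsPolicy** resource; consult the Microsoft Graph reference for the exact request shape before applying it.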
active-directory | Howto Authentication Passwordless Security Key On Premises | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-security-key-on-premises.md | The [`AzureADHybridAuthenticationManagement` module](https://www.powershellgalle # First, ensure TLS 1.2 for PowerShell gallery access. [Net.ServicePointManager]::SecurityProtocol = [Net.ServicePointManager]::SecurityProtocol -bor [Net.SecurityProtocolType]::Tls12 - # Install the Azure AD Kerberos PowerShell Module. + # Install the AzureADHybridAuthenticationManagement PowerShell module. Install-Module -Name AzureADHybridAuthenticationManagement -AllowClobber ``` |
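After installing the module above, it is typically used to create or update the Azure AD Kerberos server object for the domain. A sketch of that step follows, assuming appropriate domain and cloud administrator rights; the domain name is a placeholder:

```powershell
# Specify the on-premises Active Directory domain (illustrative value).
$domain = "contoso.corp.com"

# Prompt for a cloud administrator credential and a domain administrator credential.
$cloudCred = Get-Credential
$domainCred = Get-Credential

# Create or update the Azure AD Kerberos server object in the domain.
Set-AzureADKerberosServer -Domain $domain -CloudCredential $cloudCred -DomainCredential $domainCred
```

`Get-AzureADKerberosServer` with the same parameters can then be used to verify the object was created.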
active-directory | Howto Mfa App Passwords | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-app-passwords.md | By default, users can't create app passwords. The app passwords feature must be 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Authentication Policy Administrator](../roles/permissions-reference.md#authentication-policy-administrator). 1. Browse to **Conditional Access** > **Named locations**. 5. Click on **"Configure MFA trusted IPs"** in the bar across the top of the *Conditional Access | Named Locations* window.-6. On the **multifactor authentication** page, select the **Allow users to create app passwords to sign in to non-browser apps** option. +6. On the **Multifactor authentication** page, select the **Allow users to create app passwords to sign in to non-browser apps** option. ![Screenshot that shows the service settings for multifactor authentication to allow the user of app passwords](media/concept-authentication-methods/app-password-authentication-method.png) |
active-directory | Howto Mfa Mfasettings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-mfasettings.md | The following settings are available: To configure account lockout settings, complete these steps: 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Authentication Policy Administrator](../roles/permissions-reference.md#authentication-policy-administrator).-1. Browse to **Protection** > **multifactor authentication** > **Account lockout**. +1. Browse to **Protection** > **Multifactor authentication** > **Account lockout**. 1. Enter the values for your environment, and then select **Save**. ![Screenshot that shows the account lockout settings.](./media/howto-mfa-mfasettings/account-lockout-settings.png) When a user reports a MFA prompt as suspicious, the event shows up in the Sign-i - To view the risk detections report, select **Protection** > **Identity Protection** > **Risk detection**. The risk event is part of the standard **Risk Detections** report, and will appear as Detection Type **User Reported Suspicious Activity**, Risk level **High**, Source **End user reported**. -- To view fraud reports in the Sign-ins report, select **Identity** > **Monitoring & health** > **Sign-in logs** > **Authentication Details**. The fraud report is part of the standard **Azure AD Sign-ins** report and appears in the Result Detail as MFA denied, Fraud Code Entered. +- To view fraud reports in the Sign-ins report, select **Identity** > **Monitoring & health** > **Sign-in logs** > **Authentication Details**. The fraud report is part of the standard **Microsoft Entra sign-ins** report and appears in the Result Detail as MFA denied, Fraud Code Entered. - To view fraud reports in the Audit logs, select **Identity** > **Monitoring & health** > **Audit logs**. 
The fraud report appears under Activity type Fraud reported - user is blocked for MFA or Fraud reported - no action taken based on the tenant-level settings for fraud report. You can configure Microsoft Entra ID to send email notifications when users repo To configure fraud alert notifications: -1. Go to **Protection** > **Multi-Factor Authentication** > **Notifications**. +1. Go to **Protection** > **Multifactor authentication** > **Notifications**. 1. Enter the email address to send the notification to. 1. To remove an existing email address, select **...** next to the email address, and then select **Delete**. 1. Select **Save**. Helga@contoso.com,1234567,1234567abcdef1234567abcdef,60,Contoso,HardwareKey > [!NOTE] > Be sure to include the header row in your CSV file. -An Authentication Policy Administrator can sign in to the [Microsoft Entra admin center](https://entra.microsoft.com), go to **Protection** > **multifactor authentication** > **OATH tokens**, and upload the CSV file. +An Authentication Policy Administrator can sign in to the [Microsoft Entra admin center](https://entra.microsoft.com), go to **Protection** > **Multifactor authentication** > **OATH tokens**, and upload the CSV file. Depending on the size of the CSV file, it might take a few minutes to process. Select **Refresh** to get the status. If there are any errors in the file, you can download a CSV file that lists them. The field names in the downloaded CSV file are different from those in the uploaded version. To use your own custom messages, complete the following steps: Settings for app passwords, trusted IPs, verification options, and remembering multifactor authentication on trusted devices are available in the service settings. This is a legacy portal. 
-You can access service settings from the [Microsoft Entra admin center](https://entra.microsoft.com) by going to **Protection** > **multifactor authentication** > **Getting started** > **Configure** > **Additional cloud-based MFA settings**. A window or tab opens with additional service settings options. +You can access service settings from the [Microsoft Entra admin center](https://entra.microsoft.com) by going to **Protection** > **Multifactor authentication** > **Getting started** > **Configure** > **Additional cloud-based MFA settings**. A window or tab opens with additional service settings options. ### Trusted IPs To enable trusted IPs by using Conditional Access policies, complete the followi If you don't want to use Conditional Access policies to enable trusted IPs, you can configure the service settings for Microsoft Entra multifactor authentication by using the following steps: 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Authentication Policy Administrator](../roles/permissions-reference.md#authentication-policy-administrator).-1. Browse to **Protection** > **multifactor authentication** > **Service settings**. +1. Browse to **Protection** > **Multifactor authentication** > **Service settings**. 1. On the **Service settings** page, under **Trusted IPs**, choose one or both of the following options: * **For requests from federated users on my intranet**: To choose this option, select the checkbox. All federated users who sign in from the corporate network bypass multifactor authentication by using a claim that's issued by AD FS. Ensure that AD FS has a rule to add the intranet claim to the appropriate traffic. If the rule doesn't exist, create the following rule in AD FS: To enable or disable verification methods, complete the following steps: 1. 
Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Authentication Policy Administrator](../roles/permissions-reference.md#authentication-policy-administrator). 1. Browse to **Identity** > **Users**. 1. Select **Per-user MFA**.-1. Under **multifactor authentication** at the top of the page, select **Service settings**. +1. Under **Multifactor authentication** at the top of the page, select **Service settings**. 1. On the **Service settings** page, under **Verification options**, select or clear the appropriate checkboxes. 1. Select **Save**. To enable and configure the option to allow users to remember their MFA status a 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Authentication Policy Administrator](../roles/permissions-reference.md#authentication-policy-administrator). 1. Browse to **Identity** > **Users**. 1. Select **Per-user MFA**.-1. Under **multifactor authentication** at the top of the page, select **service settings**. +1. Under **Multifactor authentication** at the top of the page, select **service settings**. 1. On the **service settings** page, under **remember multifactor authentication**, select **Allow users to remember multifactor authentication on devices they trust**. 1. Set the number of days to allow trusted devices to bypass multifactor authentications. For the optimal user experience, extend the duration to 90 or more days. 1. Select **Save**. |
active-directory | Howto Mfa Nps Extension Rdg | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension-rdg.md | Install the NPS extension on a server that has the Network Policy and Access Ser ### Configure certificates for use with the NPS extension using a PowerShell script -Next, you need to configure certificates for use by the NPS extension to ensure secure communications and assurance. The NPS components include a Windows PowerShell script that configures a self-signed certificate for use with NPS. +Next, you need to configure certificates for use by the NPS extension to ensure secure communications and assurance. The NPS components include a PowerShell script that configures a self-signed certificate for use with NPS. The script performs the following actions: Once you have successfully authenticated using the secondary authentication meth ### View Event Viewer logs for successful logon events -To view the successful sign-in events in the Windows Event Viewer logs, you can issue the following Windows PowerShell command to query the Windows Terminal Services and Windows Security logs. +To view the successful sign-in events in the Windows Event Viewer logs, you can issue the following PowerShell command to query the Windows Terminal Services and Windows Security logs. To query successful sign-in events in the Gateway operational logs _(Event Viewer\Applications and Services Logs\Microsoft\Windows\TerminalServices-Gateway\Operational)_, use the following PowerShell commands: |
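A query along these lines can be sketched with `Get-WinEvent` (a sketch only; event ID 300 is commonly logged for successful RD Gateway connection authorizations, but verify the IDs that apply in your environment):

```powershell
# Query the RD Gateway operational log for connection-authorization events.
Get-WinEvent -LogName "Microsoft-Windows-TerminalServices-Gateway/Operational" |
    Where-Object { $_.Id -eq 300 } |
    Format-List TimeCreated, Id, Message
```

The same pattern with `-LogName "Security"` and the relevant logon event IDs covers the Windows Security log mentioned above.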
active-directory | Howto Mfa Nps Extension Vpn | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension-vpn.md | The NPS extension requires Windows Server 2008 R2 SP1 or later, with the Network The following libraries are installed automatically with the NPS extension: -- [Visual C++ Redistributable Packages for Visual Studio 2013 (X64)](https://www.microsoft.com/download/details.aspx?id=40784)-- [Azure AD PowerShell Module for Windows PowerShell version 1.1.166.0](https://connect.microsoft.com/site1164/Downloads/DownloadDetails.aspx?DownloadID=59185)+- [Visual C++ Redistributable Packages for Visual Studio 2013 (X64)](https://www.microsoft.com/download/details.aspx?id=40784) +- [Azure AD PowerShell module version 1.1.166.0](https://connect.microsoft.com/site1164/Downloads/DownloadDetails.aspx?DownloadID=59185) -If the Microsoft Azure Active Directory PowerShell Module is not already present, it is installed with a configuration script that you run as part of the setup process. There is no need to install the module ahead of time if it is not already installed. +If the Azure Active Directory PowerShell module is not already present, it is installed with a configuration script that you run as part of the setup process. There is no need to install the module ahead of time if it is not already installed. ### Azure Active Directory synced with on-premises Active Directory The NPS extension must be installed on a server that has the Network Policy and ### Configure certificates for use with the NPS extension by using a PowerShell script -To ensure secure communications and assurance, configure certificates for use by the NPS extension. The NPS components include a Windows PowerShell script that configures a self-signed certificate for use with NPS. +To ensure secure communications and assurance, configure certificates for use by the NPS extension. 
The NPS components include a PowerShell script that configures a self-signed certificate for use with NPS. The script performs the following actions: |
active-directory | Howto Mfa Nps Extension | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension.md | You need to manually install the following library: The following libraries are installed automatically with the extension. - [Visual C++ Redistributable Packages for Visual Studio 2013 (X64)](https://www.microsoft.com/download/details.aspx?id=40784)-- [Azure AD PowerShell Module for Windows PowerShell version 1.1.166.0](https://www.powershellgallery.com/packages/MSOnline/1.1.166.0)+- [PowerShell module version 1.1.166.0](https://www.powershellgallery.com/packages/MSOnline/1.1.166.0) -The Azure AD PowerShell Module for Windows PowerShell is also installed through a configuration script you run as part of the setup process, if not already present. There's no need to install this module ahead of time if it's not already installed. +The PowerShell module is also installed through a configuration script you run as part of the setup process, if not already present. There's no need to install this module ahead of time if it's not already installed. ### Obtain the directory tenant ID If you need to create and configure a test account, use the following steps: 1. Sign in to [https://aka.ms/mfasetup](https://aka.ms/mfasetup) with a test account. 2. Follow the prompts to set up a verification method. 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Authentication Policy Administrator](../roles/permissions-reference.md#authentication-policy-administrator).-1. Browse to **Protection** > **multifactor authentication** and enable for the test account. +1. Browse to **Protection** > **Multifactor authentication** and enable for the test account. > [!IMPORTANT] > |
active-directory | Howto Mfa Userdevicesettings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-userdevicesettings.md | To delete a user's app passwords, complete the following steps: 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Authentication Administrator](../roles/permissions-reference.md#authentication-administrator). 1. Browse to **Identity** > **Users** > **All users**. -1. Select **multifactor authentication**. You may need to scroll to the right to see this menu option. Select the example screenshot below to see the full window and menu location: +1. Select **Multifactor authentication**. You may need to scroll to the right to see this menu option. Select the example screenshot below to see the full window and menu location: [![Select multifactor authentication from the Users window in Azure AD.](media/howto-mfa-userstates/selectmfa-cropped.png)](media/howto-mfa-userstates/selectmfa.png#lightbox) 1. Check the box next to the user or users that you wish to manage. A list of quick step options appears on the right. 1. Select **Manage user settings**, then check the box for **Delete all existing app passwords generated by the selected users**, as shown in the following example: |
active-directory | Howto Mfaserver Adfs Windows Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfaserver-adfs-windows-server.md | Before you begin, be aware of the following information: `C:\Program Files\Multi-Factor Authentication Server\Register-MultiFactorAuthenticationAdfsAdapter.ps1` -12. To use your newly registered adapter, edit the global authentication policy in AD FS. In the AD FS management console, go to the **Authentication Policies** node. In the **multifactor authentication** section, click the **Edit** link next to the **Global Settings** section. In the **Edit Global Authentication Policy** window, select **multifactor authentication** as an additional authentication method, and then click **OK**. The adapter is registered as WindowsAzureMultiFactorAuthentication. Restart the AD FS service for the registration to take effect. +12. To use your newly registered adapter, edit the global authentication policy in AD FS. In the AD FS management console, go to the **Authentication Policies** node. In the **Multifactor authentication** section, click the **Edit** link next to the **Global Settings** section. In the **Edit Global Authentication Policy** window, select **Multifactor authentication** as an additional authentication method, and then click **OK**. The adapter is registered as WindowsAzureMultiFactorAuthentication. Restart the AD FS service for the registration to take effect. ![Edit global authentication policy](./media/howto-mfaserver-adfs-2012/global.png) |
active-directory | Howto Mfaserver Deploy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfaserver-deploy.md | Follow these steps to download the Microsoft Entra multifactor authentication Se > Existing customers that activated MFA Server before July 1, 2019 can download the latest version, future updates, and generate activation credentials as usual. The following steps only work if you were an existing MFA Server customer. 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as a [Global Administrator](../roles/permissions-reference.md#global-administrator).-1. Browse to **Protection** > **multifactor authentication** > **Server settings**. +1. Browse to **Protection** > **Multifactor authentication** > **Server settings**. 4. Select **Download** and follow the instructions on the download page to save the installer. ![Download MFA Server](./media/howto-mfaserver-deploy/downloadportal.png) |
active-directory | Howto Password Smart Lockout | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-password-smart-lockout.md | When using [pass-through authentication](../hybrid/connect/how-to-connect-pta.md For example, if you want your Microsoft Entra smart lockout duration to be higher than AD DS, then Microsoft Entra ID would be 120 seconds (2 minutes) while your on-premises AD is set to 1 minute (60 seconds). If you want your Microsoft Entra lockout threshold to be 5, then you want your on-premises AD DS lockout threshold to be 10. This configuration would ensure smart lockout prevents your on-premises AD DS accounts from being locked out by brute force attacks on your Microsoft Entra accounts. > [!IMPORTANT]-> An administrator can unlock the users' cloud account if they have been locked out by the Smart Lockout capability, without the need of waiting for the lockout duration to expire. For more information, see [Reset a user's password using Azure Active Directory](../fundamentals/users-reset-password-azure-portal.md). +> An administrator can unlock the users' cloud account if they have been locked out by the Smart Lockout capability, without the need of waiting for the lockout duration to expire. For more information, see [Reset a user's password using Microsoft Entra ID](../fundamentals/users-reset-password-azure-portal.md). ## Verify on-premises account lockout policy |
active-directory | Tutorial Enable Sspr Writeback | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/tutorial-enable-sspr-writeback.md | To enable SSPR writeback, first enable the writeback option in Microsoft Entra C 1. Sign in to your Microsoft Entra Connect server and start the **Microsoft Entra Connect** configuration wizard. 1. On the **Welcome** page, select **Configure**. 1. On the **Additional tasks** page, select **Customize synchronization options**, and then select **Next**.-1. On the **Connect to Microsoft Entra ID** page, enter a global administrator credential for your Azure tenant, and then select **Next**. +1. On the **Connect to Microsoft Entra ID** page, enter a Global Administrator credential for your Azure tenant, and then select **Next**. 1. On the **Connect directories** and **Domain/OU** filtering pages, select **Next**. 1. On the **Optional features** page, select the box next to **Password writeback** and select **Next**. With password writeback enabled in Microsoft Entra Connect, now configure Micros To enable password writeback in SSPR, complete the following steps: -1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Authentication Policy Administrator](../roles/permissions-reference.md#authentication-policy-administrator). +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as [Global Administrator](../roles/permissions-reference.md#global-administrator). 1. Browse to **Protection** > **Password reset**, then choose **On-premises integration**. 1. Check the option for **Write back passwords to your on-premises directory**. 1. (optional) If Microsoft Entra Connect provisioning agents are detected, you can additionally check the option for **Write back passwords with Microsoft Entra Connect cloud sync**. 
To enable password writeback in SSPR, complete the following steps: If you no longer want to use the SSPR writeback functionality you have configured as part of this tutorial, complete the following steps: -1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Authentication Policy Administrator](../roles/permissions-reference.md#authentication-policy-administrator). +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as [Global Administrator](../roles/permissions-reference.md#global-administrator). 1. Browse to **Protection** > **Password reset**, then choose **On-premises integration**. 1. Uncheck the option for **Write back passwords to your on-premises directory**. 1. Uncheck the option for **Write back passwords with Microsoft Entra Connect cloud sync**. If you no longer want to use the SSPR writeback functionality you have configure 1. When ready, select **Save**. If you no longer want to use the Microsoft Entra Connect cloud sync for SSPR writeback functionality but want to continue using Microsoft Entra Connect Sync agent for writebacks complete the following steps:-1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Authentication Policy Administrator](../roles/permissions-reference.md#authentication-policy-administrator). +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as [Global Administrator](../roles/permissions-reference.md#global-administrator). 1. Browse to **Protection** > **Password reset**, then choose **On-premises integration**. 1. Uncheck the option for **Write back passwords with Microsoft Entra Connect cloud sync**. 1. When ready, select **Save**. |
active-directory | Tutorial Risk Based Sspr Mfa | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/tutorial-risk-based-sspr-mfa.md | Microsoft Entra ID Protection includes a default policy that can help get users It's recommended to enable the MFA registration policy for users that are to be enabled for additional Microsoft Entra ID Protection policies. To enable this policy, complete the following steps: 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Authentication Policy Administrator](../roles/permissions-reference.md#authentication-policy-administrator).-1. Browse to **Protection** > **multifactor authentication** > **MFA registration policy**. +1. Browse to **Protection** > **Multifactor authentication** > **MFA registration policy**. 1. By default, the policy applies to *All users*. If desired, select **Assignments**, then choose the users or groups to apply the policy on. 1. Under *Controls*, select **Access**. Make sure the option for *Require Microsoft Entra multifactor authentication registration* is checked, then choose **Select**. 1. Set **Enforce Policy** to *On*, then select **Save**. |
active-directory | About Microsoft Identity Platform | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/about-microsoft-identity-platform.md | -Many developers have previously worked with the Azure AD v1.0 platform to authenticate work and school accounts (provisioned by Azure AD) by requesting tokens from the Azure AD v1.0 endpoint, using Azure AD Authentication Library (ADAL), Azure portal for application registration and configuration, and the Microsoft Graph API for programmatic application configuration. +Many developers have previously worked with the Azure AD v1.0 platform to authenticate Microsoft work and school accounts by requesting tokens from the Azure AD v1.0 endpoint, using Azure AD Authentication Library (ADAL), Azure portal for application registration and configuration, and the Microsoft Graph API for programmatic application configuration. With the unified Microsoft identity platform (v2.0), you can write code once and authenticate any Microsoft identity into your application. For several platforms, the fully supported open-source Microsoft Authentication Library (MSAL) is recommended for use against the identity platform endpoints. MSAL is simple to use, provides great single sign-on (SSO) experiences for your users, helps you achieve high reliability and performance, and is developed using Microsoft Secure Development Lifecycle (SDL). When calling APIs, you can configure your application to take advantage of incremental consent, which allows you to delay the request for consent for more invasive scopes until the application's usage warrants this at runtime. MSAL also supports Azure Active Directory B2C, so your customers use their preferred social, enterprise, or local account identities to get single sign-on access to your applications and APIs. |
active-directory | Active Directory Authentication Libraries | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/active-directory-authentication-libraries.md | Title: Azure Active Directory Authentication Libraries + Title: Azure Active Directory Authentication Library description: The Azure AD Authentication Library (ADAL) allows client application developers to easily authenticate users to cloud or on-premises Active Directory (AD) and then obtain access tokens for securing API calls. -# Azure Active Directory Authentication Libraries +# Azure Active Directory Authentication Library [!INCLUDE [active-directory-azuread-dev](../../../includes/active-directory-azuread-dev.md)] The Azure Active Directory Authentication Library (ADAL) v1.0 enables applicatio - Support for asynchronous method calls > [!NOTE]-> Looking for the Azure AD v2.0 libraries (MSAL)? Check out the [MSAL library guide](../develop/reference-v2-libraries.md). +> Looking for the Azure AD v2.0 libraries? Check out the [MSAL library guide](../develop/reference-v2-libraries.md). > [!WARNING] |
active-directory | Active Directory Devhowto Adal Error Handling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/active-directory-devhowto-adal-error-handling.md | In this article, we explore the specific cases for each platform supported by AD - **AcquireToken**: Client can attempt silent acquisition, but can also perform interactive requests that require sign-in. > [!TIP]-> It's a good idea to log all errors and exceptions when using ADAL and Azure AD. Logs are not only helpful for understanding the overall health of your application, but are also important when debugging broader problems. While your application may recover from certain errors, they may hint at broader design problems that require code changes in order to resolve. +> It's a good idea to log all errors and exceptions when using ADAL. Logs are not only helpful for understanding the overall health of your application, but are also important when debugging broader problems. While your application may recover from certain errors, they may hint at broader design problems that require code changes in order to resolve. > > When implementing the error conditions covered in this document, you should log the error code and description for the reasons discussed earlier. See the [Error and logging reference](#error-and-logging-reference) for examples of logging code. > window.Logging = { ## Related content -* [Azure AD Authentication Libraries][AAD-Auth-Libraries] -* [Azure AD Authentication Scenarios][AAD-Auth-Scenarios] -* [Integrating Applications with Azure AD Authentication][AAD-Integrating-Apps] +* [Azure AD Authentication Library][Auth-Libraries] +* [Authentication scenarios][Auth-Scenarios] +* [Register an application with the Microsoft identity platform][Integrating-Apps] Use the comments section that follows, to provide feedback and help us refine and shape our content. 
-[![Shows the "Sign in with Microsoft" button][AAD-Sign-In]][AAD-Sign-In] +[![Shows the "Sign in with Microsoft" button][Sign-In]][Sign-In] + <!--Reference style links --> -[AAD-Auth-Libraries]: ./active-directory-authentication-libraries.md -[AAD-Auth-Scenarios]:v1-authentication-scenarios.md -[AAD-Integrating-Apps]:../develop/quickstart-register-app.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json +[Auth-Libraries]: ./active-directory-authentication-libraries.md +[Auth-Scenarios]:v1-authentication-scenarios.md +[Integrating-Apps]:../develop/quickstart-register-app.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json <!--Image references-->-[AAD-Sign-In]:./media/active-directory-devhowto-multi-tenant-overview/sign-in-with-microsoft-light.png +[Sign-In]:./media/active-directory-devhowto-multi-tenant-overview/sign-in-with-microsoft-light.png |
active-directory | Concept Conditional Access Conditions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-conditions.md | This setting has an effect on access attempts made from the following mobile app - When administrators create a policy assigned to Exchange ActiveSync clients, **Exchange Online** should be the only cloud application assigned to the policy. - Administrators can narrow the scope of this policy to specific platforms using the **Device platforms** condition. -If the access control assigned to the policy uses **Require approved client app**, the user is directed to install and use the Outlook mobile client. In the case that **Multifactor Authentication**, **Terms of use**, or **custom controls** are required, affected users are blocked, because basic authentication doesn't support these controls. +If the access control assigned to the policy uses **Require approved client app**, the user is directed to install and use the Outlook mobile client. In the case that **Multifactor authentication**, **Terms of use**, or **custom controls** are required, affected users are blocked, because basic authentication doesn't support these controls. For more information, see the following articles: |
active-directory | Policy Migration Mfa | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/policy-migration-mfa.md | -This article shows an example of how to migrate a classic policy that requires **multifactor authentication** for a cloud app. +This article shows an example of how to migrate a classic policy that requires **Multifactor authentication** for a cloud app. ![Classic policy details requiring MFA for Salesforce app](./media/policy-migration/33.png) |
active-directory | Troubleshoot Conditional Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/troubleshoot-conditional-access.md | -The information in this article can be used to troubleshoot unexpected sign-in outcomes related to Conditional Access using error messages and Microsoft Entra sign-ins log. +The information in this article can be used to troubleshoot unexpected sign-in outcomes related to Conditional Access using error messages and Microsoft Entra sign-in logs. ## Select "all" consequences To find out which Conditional Access policy or policies applied and why do the f 1. **Username** to see information related to specific users. 1. **Date** scoped to the time frame in question. - ![Screenshot showing selecting the Conditional Access filter in the sign-ins log.](./media/troubleshoot-conditional-access/image3.png) + ![Screenshot showing selecting the Conditional Access filter in the sign-in log.](./media/troubleshoot-conditional-access/image3.png) 1. Once the sign-in event that corresponds to the user's sign-in failure has been found select the **Conditional Access** tab. The Conditional Access tab shows the specific policy or policies that resulted in the sign-in interruption. 1. Information in the **Troubleshooting and support** tab may provide a clear reason as to why a sign-in failed such as a device that didn't meet compliance requirements. More information about error codes can be found in the article [Microsoft Entra In some specific scenarios, users are blocked because there are cloud apps with dependencies on resources blocked by Conditional Access policy. -To determine the service dependency, check the sign-ins log for the application and resource called by the sign-in. In the following screenshot, the application called is **Azure Portal** but the resource called is **Windows Azure Service Management API**. 
To target this scenario appropriately all the applications and resources should be similarly combined in Conditional Access policy. +To determine the service dependency, check the sign-in log for the application and resource called by the sign-in. In the following screenshot, the application called is **Azure Portal** but the resource called is **Windows Azure Service Management API**. To target this scenario appropriately all the applications and resources should be similarly combined in Conditional Access policy. :::image type="content" source="media/troubleshoot-conditional-access/service-dependency-example-sign-in.png" alt-text="Screenshot that shows an example sign-in log showing an Application calling a Resource. This scenario is also known as a service dependency." lightbox="media/troubleshoot-conditional-access/service-dependency-example-sign-in.png"::: |
active-directory | App Only Access Primer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/app-only-access-primer.md | Always follow the principle of least privilege: you should never request app rol ## Designing and publishing app roles for a resource service -If you're building a service on Microsoft Entra ID that exposes APIs for other clients to call, you may wish to support automated access with app roles (app-only permissions). You can define the app roles for your application in the **App roles** section of your app registration in Microsoft Entra portal. For more information on how to create app roles, see [Declare roles for an application](./howto-add-app-roles-in-apps.md#declare-roles-for-an-application). +If you're building a service on Microsoft Entra ID that exposes APIs for other clients to call, you may wish to support automated access with app roles (app-only permissions). You can define the app roles for your application in the **App roles** section of your app registration in Microsoft Entra admin center. For more information on how to create app roles, see [Declare roles for an application](./howto-add-app-roles-in-apps.md#declare-roles-for-an-application). When exposing app roles for others to use, provide clear descriptions of the scenario to the admin who is going to assign them. App roles should generally be as narrow as possible and support specific functional scenarios, since app-only access isn't constrained by user rights. Avoid exposing a single role that grants full `read` or full `read/write` access to all APIs and resources your service contains. |
active-directory | Configurable Token Lifetimes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/configurable-token-lifetimes.md | ID tokens are passed to websites and native clients. ID tokens contain profile i You cannot set token lifetime policies for refresh tokens and session tokens. For lifetime, timeout, and revocation information on refresh tokens, see [Refresh tokens](refresh-tokens.md). > [!IMPORTANT]-> As of January 30, 2021 you cannot configure refresh and session token lifetimes. Microsoft Entra ID no longer honors refresh and session token configuration in existing policies. New tokens issued after existing tokens have expired are now set to the [default configuration](#configurable-token-lifetime-properties). You can still configure access, SAML, and ID token lifetimes after the refresh and session token configuration retirement. +> As of January 30, 2021 you cannot configure refresh and session token lifetimes. Microsoft Entra no longer honors refresh and session token configuration in existing policies. New tokens issued after existing tokens have expired are now set to the [default configuration](#configurable-token-lifetime-properties). You can still configure access, SAML, and ID token lifetimes after the refresh and session token configuration retirement. > > Existing token's lifetime will not be changed. After they expire, a new token will be issued based on the default value. > |
active-directory | Desktop Quickstart Portal Nodejs Desktop | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/desktop-quickstart-portal-nodejs-desktop.md | -> For the code sample for this quickstart to work, you need to add a reply URL as **msal://redirect**. +> For the code sample for this quickstart to work, you need to add a reply URL as `msal://redirect`. > > <button id="makechanges" class="nextstepaction configure-app-button"> Make these changes for me </button> > |
active-directory | Developer Glossary | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/developer-glossary.md | For more information, see [Application and Service Principal Objects][AAD-App-SP In order to allow an application to integrate with and delegate Identity and Access Management functions to Microsoft Entra ID, it must be registered with a Microsoft Entra [tenant](#tenant). When you register your application with Microsoft Entra ID, you're providing an identity configuration for your application, allowing it to integrate with Microsoft Entra ID and use features like: -- Robust management of single sign-on using Microsoft Entra Identity Management and [OpenID Connect][OpenIDConnect] protocol implementation+- Robust management of single sign-on using Microsoft Entra identity management and [OpenID Connect][OpenIDConnect] protocol implementation - Brokered access to [protected resources](#resource-server) by [client applications](#client-application), via OAuth 2.0 [authorization server](#authorization-server) - [Consent framework](#consent) for managing client access to protected resources, based on resource owner authorization. |
active-directory | Developer Support Help Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/developer-support-help-options.md | If you can't find an answer to your problem by searching Microsoft Q&A, submit a If you need help with one of the Microsoft Authentication Libraries (MSAL), open an issue in its repository on GitHub. +<!-- docutune:disable --> + | MSAL | GitHub issues URL | | - | | | MSAL for Android | https://github.com/AzureAD/microsoft-authentication-library-for-android/issues | If you need help with one of the Microsoft Authentication Libraries (MSAL), open | MSAL Python | https://github.com/AzureAD/microsoft-authentication-library-for-python/issues | | MSAL React | https://github.com/AzureAD/microsoft-authentication-library-for-js/issues | +<!-- docutune:enable --> + ## Stay informed of updates and new releases <div class='icon is-large'> |
active-directory | Federation Metadata | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/federation-metadata.md | + + Title: Azure AD federation metadata +description: This article describes the federation metadata document that Microsoft Entra ID publishes for services that accept Microsoft Entra ID tokens. +++++++ Last updated : 09/07/2023++++++# Federation metadata ++Microsoft Entra ID publishes a federation metadata document for services that are configured to accept the security tokens that Microsoft Entra ID issues. The federation metadata document format is described in the [Web Services Federation Language (WS-Federation) Version 1.2](https://docs.oasis-open.org/wsfed/federation/v1.2/os/ws-federation-1.2-spec-os.html), which extends [Metadata for the OASIS Security Assertion Markup Language (SAML) v2.0](https://docs.oasis-open.org/security/saml/v2.0/saml-metadata-2.0-os.pdf). ++## Tenant-specific and tenant-independent metadata endpoints ++Microsoft Entra ID publishes tenant-specific and tenant-independent endpoints. ++Tenant-specific endpoints are designed for a particular tenant. The tenant-specific federation metadata includes information about the tenant, including tenant-specific issuer and endpoint information. Applications that restrict access to a single tenant use tenant-specific endpoints. ++Tenant-independent endpoints provide information that is common to all Microsoft Entra tenants. This information applies to tenants hosted at *login.microsoftonline.com* and is shared across tenants. Tenant-independent endpoints are recommended for multi-tenant applications, since they are not associated with any particular tenant. ++## Federation metadata endpoints ++Microsoft Entra ID publishes federation metadata at `https://login.microsoftonline.com/<TenantDomainName>/FederationMetadata/2007-06/FederationMetadata.xml`. 
++For **tenant-specific endpoints**, the `TenantDomainName` can be one of the following types: ++* A registered domain name of an Azure AD tenant, such as: `contoso.onmicrosoft.com`. +* The immutable tenant ID of the domain, such as `72f988bf-86f1-41af-91ab-2d7cd011db45`. ++For **tenant-independent endpoints**, the `TenantDomainName` is `common`. This document lists only the Federation Metadata elements that are common to all Azure AD tenants that are hosted at login.microsoftonline.com. ++For example, a tenant-specific endpoint might be `https://login.microsoftonline.com/contoso.onmicrosoft.com/FederationMetadata/2007-06/FederationMetadata.xml`. The tenant-independent endpoint is [https://login.microsoftonline.com/common/FederationMetadata/2007-06/FederationMetadata.xml](https://login.microsoftonline.com/common/FederationMetadata/2007-06/FederationMetadata.xml). You can view the federation metadata document by typing this URL in a browser. ++## Contents of federation metadata ++The following section provides information needed by services that consume the tokens issued by Azure AD. ++### Entity ID ++The `EntityDescriptor` element contains an `EntityID` attribute. The value of the `EntityID` attribute represents the issuer, that is, the security token service (STS) that issued the token. It is important to validate the issuer when you receive a token. ++The following metadata shows a sample tenant-specific `EntityDescriptor` element with an `EntityID` element. ++```xml +<EntityDescriptor +xmlns="urn:oasis:names:tc:SAML:2.0:metadata" +ID="_b827a749-cfcb-46b3-ab8b-9f6d14a1294b" +entityID="https://sts.windows.net/72f988bf-86f1-41af-91ab-2d7cd011db45/"> +``` ++You can replace the tenant ID in the tenant-independent endpoint with your tenant ID to create a tenant-specific `EntityID` value. The resulting value will be the same as the token issuer. The strategy allows a multi-tenant application to validate the issuer for a given tenant. 
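The endpoint and issuer construction described above is plain string substitution, which can be sketched as follows (helper names are illustrative):

```python
BASE = "https://login.microsoftonline.com"

def metadata_url(tenant: str) -> str:
    """Build the federation metadata URL. `tenant` is a registered domain
    name, an immutable tenant ID (GUID), or 'common' for the
    tenant-independent endpoint."""
    return f"{BASE}/{tenant}/FederationMetadata/2007-06/FederationMetadata.xml"

def expected_issuer(tenant_id: str) -> str:
    """The EntityID (token issuer) a relying party should expect for a
    given tenant ID, per the EntityDescriptor samples above."""
    return f"https://sts.windows.net/{tenant_id}/"

print(metadata_url("contoso.onmicrosoft.com"))
print(metadata_url("common"))
print(expected_issuer("72f988bf-86f1-41af-91ab-2d7cd011db45"))
```

A multi-tenant application can compute `expected_issuer` per tenant and compare it against the issuer claim in each received token.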
++The following metadata shows a sample tenant-independent `EntityID` element. Note that `{tenant}` is a literal, not a placeholder. ++```xml +<EntityDescriptor +xmlns="urn:oasis:names:tc:SAML:2.0:metadata" +ID="_0e5bd9d0-49ef-4258-bc15-21ce143b61bd" +entityID="https://sts.windows.net/{tenant}/"> +``` ++### Token signing certificates ++When a service receives a token that is issued by an Azure AD tenant, the signature of the token must be validated with a signing key that is published in the federation metadata document. The federation metadata includes the public portion of the certificates that the tenants use for token signing. The certificate raw bytes appear in the `KeyDescriptor` element. The token signing certificate is valid for signing only when the value of the `use` attribute is `signing`. ++A federation metadata document published by Azure AD can have multiple signing keys, such as when Azure AD is preparing to update the signing certificate. When a federation metadata document includes more than one certificate, a service that is validating the tokens should support all certificates in the document. ++The following metadata shows a sample `KeyDescriptor` element with a signing key. 
++```xml +<KeyDescriptor use="signing"> +<KeyInfo xmlns="https://www.w3.org/2000/09/xmldsig#"> +<X509Data> +<X509Certificate> +MIIDPjCCAiqgAwIBAgIQVWmXY/+9RqFTeGY1D711EORX/lVXpr+ecGgqfUWF8MPB07XkYuJ54DAuYT318+2XrzMjOtqkT94VkXmxv6dFGhG8YZ8vNMPd4tdj9c0lpvWQdqXtL1TlFRpD/P6UMEigfN0c9oWDg9U7Ilymgei0UXtf1gtcQbc5sSQU0S4vr9YJp2gLFIGK11Iqg4XSGdcI0QWLLkkC6cBukhVnd6BCYbLjTYy3fNs4DzNdemJlxGl8sLexFytBF6YApvSdus3nFXaMCtBGx16HzkK9ne3lobAwL2o79bP4imEGqg+ibvyNmbrwFGnQrBc1jTF9LyQX9q+louxVfHs6ZiVwIDAQABo2IwYDBeBgNVHQEEVzBVgBCxDDsLd8xkfOLKm4Q/SzjtoS8wLTErMCkGA1UEAxMiYWNjb3VudHMuYWNjZXNzY29udHJvbC53aW5kb3dzLm5ldIIQVWmXY/+9RqFA/OG9kFulHDAJBgUrDgMCHQUAA4IBAQAkJtxxm/ErgySlNk69+1odTMP8Oy6L0H17z7XGG3w4TqvTUSWaxD4hSFJ0e7mHLQLQD7oV/erACXwSZn2pMoZ89MBDjOMQA+e6QzGB7jmSzPTNmQgMLA8fWCfqPrz6zgH+1F1gNp8hJY57kfeVPBiyjuBmlTEBsBlzolY9dd/55qqfQk6cgSeCbHCy/RU/iep0+UsRMlSgPNNmqhj5gmN2AFVCN96zF694LwuPae5CeR2ZcVknexOWHYjFM0MgUSw0ubnGl0h9AJgGyhvNGcjQqu9vd1xkupFgaN+f7P3p3EVN5csBg5H94jEcQZT7EKeTiZ6bTrpDAnrr8tDCy8ng +</X509Certificate> +</X509Data> +</KeyInfo> +</KeyDescriptor> + ``` ++The `KeyDescriptor` element appears in two places in the federation metadata document; in the WS-Federation-specific section and the SAML-specific section. The certificates published in both sections will be the same. ++In the WS-Federation-specific section, a WS-Federation metadata reader would read the certificates from a `RoleDescriptor` element with the `SecurityTokenServiceType` type. ++The following metadata shows a sample `RoleDescriptor` element. ++```xml +<RoleDescriptor xmlns:xsi="https://www.w3.org/2001/XMLSchema-instance" xmlns:fed="https://docs.oasis-open.org/wsfed/federation/200706" xsi:type="fed:SecurityTokenServiceType" protocolSupportEnumeration="https://docs.oasis-open.org/wsfed/federation/200706"> +``` ++In the SAML-specific section, a WS-Federation metadata reader would read the certificates from a `IDPSSODescriptor` element. ++The following metadata shows a sample `IDPSSODescriptor` element. 
++```xml +<IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol"> +``` +There are no differences in the format of tenant-specific and tenant-independent certificates. ++### WS-Federation endpoint URL ++The federation metadata includes the URL that Azure AD uses for single sign-in and single sign-out in WS-Federation protocol. This endpoint appears in the `PassiveRequestorEndpoint` element. ++The following metadata shows a sample `PassiveRequestorEndpoint` element for a tenant-specific endpoint. ++```xml +<fed:PassiveRequestorEndpoint> +<EndpointReference xmlns="https://www.w3.org/2005/08/addressing"> +<Address> +https://login.microsoftonline.com/72f988bf-86f1-41af-91ab-2d7cd011db45/wsfed +</Address> +</EndpointReference> +</fed:PassiveRequestorEndpoint> +``` ++For the tenant-independent endpoint, the WS-Federation URL appears in the WS-Federation endpoint, as shown in the following sample. ++```xml +<fed:PassiveRequestorEndpoint> +<EndpointReference xmlns="https://www.w3.org/2005/08/addressing"> +<Address> +https://login.microsoftonline.com/common/wsfed +</Address> +</EndpointReference> +</fed:PassiveRequestorEndpoint> +``` ++### SAML protocol endpoint URL ++The federation metadata includes the URL that Azure AD uses for single sign-in and single sign-out in SAML 2.0 protocol. These endpoints appear in the `IDPSSODescriptor` element. ++The sign-in and sign-out URLs appear in the `SingleSignOnService` and `SingleLogoutService` elements. ++The following metadata shows sample `SingleSignOnService` and `SingleLogoutService` elements for a tenant-specific endpoint. 
++```xml +<IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol"> +… + <SingleLogoutService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location="https://login.microsoftonline.com/contoso.onmicrosoft.com/saml2" /> + <SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location="https://login.microsoftonline.com/contoso.onmicrosoft.com/saml2" /> + </IDPSSODescriptor> +``` ++Similarly, the common SAML 2.0 protocol endpoints are published in the tenant-independent federation metadata, as shown in the following sample. ++```xml +<IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol"> +… + <SingleLogoutService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location="https://login.microsoftonline.com/common/saml2" /> + <SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location="https://login.microsoftonline.com/common/saml2" /> + </IDPSSODescriptor> +``` |
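Putting the pieces together, a relying party reads the issuer, signing certificates, and protocol endpoints from the metadata document. A sketch using Python's standard `xml.etree.ElementTree`, run against a trimmed inline sample that mirrors the structure above (the certificate body is a placeholder, not a real key):

```python
import xml.etree.ElementTree as ET

MD = "urn:oasis:names:tc:SAML:2.0:metadata"
DS = "http://www.w3.org/2000/09/xmldsig#"

# Trimmed inline sample; in practice this would be fetched from the
# federation metadata endpoint.
METADATA = f"""<EntityDescriptor xmlns="{MD}"
  entityID="https://sts.windows.net/72f988bf-86f1-41af-91ab-2d7cd011db45/">
  <IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <KeyDescriptor use="signing">
      <KeyInfo xmlns="{DS}"><X509Data>
        <X509Certificate>MIID...placeholder...</X509Certificate>
      </X509Data></KeyInfo>
    </KeyDescriptor>
    <SingleSignOnService
      Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect"
      Location="https://login.microsoftonline.com/common/saml2"/>
  </IDPSSODescriptor>
</EntityDescriptor>"""

root = ET.fromstring(METADATA)

# The EntityID attribute is the issuer to validate on every received token.
issuer = root.get("entityID")

# A metadata document may carry several signing keys (for example during
# certificate rollover); collect every KeyDescriptor marked use="signing".
certs = [
    c.text.strip()
    for kd in root.iter(f"{{{MD}}}KeyDescriptor")
    if kd.get("use") == "signing"
    for c in kd.iter(f"{{{DS}}}X509Certificate")
]

# The SAML single sign-on endpoint from the IDPSSODescriptor element.
sso_url = root.find(
    f"{{{MD}}}IDPSSODescriptor/{{{MD}}}SingleSignOnService").get("Location")

print(issuer, len(certs), sso_url)
```

A service validating tokens should accept signatures from any certificate in `certs`, per the rollover note above.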
active-directory | How To Integrate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/how-to-integrate.md | Integration with the Microsoft identity platform comes with benefits that do not **Multi-factor authentication.** The Microsoft identity platform provides native multi-factor authentication. IT administrators can require multi-factor authentication to access your application, so that you do not have to code this support yourself. Learn more about [Multi-Factor Authentication](/azure/multi-factor-authentication/). -**Anomalous sign in detection.** The Microsoft identity platform processes more than a billion sign-ins a day, while using machine learning algorithms to detect suspicious activity and notify IT administrators of possible problems. By supporting the Microsoft identity platform sign-in, your application gets the benefit of this protection. Learn more about [viewing Microsoft Entra access report](../reports-monitoring/overview-reports.md). +**Anomalous sign in detection.** The Microsoft identity platform processes more than a billion sign-ins a day, while using machine learning algorithms to detect suspicious activity and notify IT administrators of possible problems. By supporting the Microsoft identity platform sign-in, your application gets the benefit of this protection. Learn more about [viewing Microsoft Entra reports](../reports-monitoring/overview-monitoring-health.md). **Conditional Access.** In addition to multi-factor authentication, administrators can require specific conditions be met before users can sign-in to your application. Conditions that can be set include the IP address range of client devices, membership in specified groups, and the state of the device being used for access. Learn more about [Microsoft Entra Conditional Access](../conditional-access/overview.md). |
active-directory | Howto Create Self Signed Certificate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-create-self-signed-certificate.md | -For testing, you can use a self-signed public certificate instead of a Certificate Authority (CA)-signed certificate. In this how-to, you'll use Windows PowerShell to create and export a self-signed certificate. +For testing, you can use a self-signed public certificate instead of a Certificate Authority (CA)-signed certificate. In this how-to, you'll use PowerShell to create and export a self-signed certificate. > [!CAUTION] > Self-signed certificates are not trusted by default and they can be difficult to maintain. Also, they may use outdated hash and cipher suites that may not be strong. For better security, purchase a certificate signed by a well-known certificate authority. To customize the start and expiry date and other properties of the certificate, ## Create and export your public certificate -Use the certificate you create using this method to authenticate from an application running from your machine. For example, authenticate from Windows PowerShell. +Use the certificate you create using this method to authenticate from an application running from your machine. For example, authenticate from PowerShell. In a PowerShell prompt, run the following command and leave the PowerShell console session open. Replace `{certificateName}` with the name that you wish to give to your certificate. |
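For readers who prefer a cross-platform alternative to the PowerShell cmdlets described in that article, a comparable self-signed test certificate can be produced with the third-party `cryptography` package (assumed installed; the subject name is illustrative):

```python
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# Generate a 2048-bit RSA key pair for the certificate.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

subject = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "mytestcert")])
now = datetime.datetime.now(datetime.timezone.utc)

cert = (
    x509.CertificateBuilder()
    .subject_name(subject)
    .issuer_name(subject)            # self-signed: issuer == subject
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    .sign(key, hashes.SHA256())
)

# PEM-encoded public certificate, e.g. for upload to an app registration.
pem = cert.public_bytes(serialization.Encoding.PEM)
print(pem.decode()[:27])  # -----BEGIN CERTIFICATE-----
```

As the article cautions for the PowerShell flow, certificates produced this way are untrusted by default and suitable for testing only.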
active-directory | Msal Compare Msal Js And Adal Js | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-compare-msal-js-and-adal-js.md | -[Microsoft Authentication Library for JavaScript](https://github.com/AzureAD/microsoft-authentication-library-for-js) (MSAL.js, also known as *msal-browser*) 2.x is the authentication library we recommend using with JavaScript applications on the Microsoft identity platform. This article highlights the changes you need to make to migrate an app that uses the ADAL.js to use MSAL.js 2.x +[Microsoft Authentication Library for JavaScript](https://github.com/AzureAD/microsoft-authentication-library-for-js) (MSAL.js, also known as `msal-browser`) 2.x is the authentication library we recommend using with JavaScript applications on the Microsoft identity platform. This article highlights the changes you need to make to migrate an app from ADAL.js to MSAL.js 2.x. > [!NOTE] > We strongly recommend MSAL.js 2.x over MSAL.js 1.x. The auth code grant flow is more secure and allows single-page applications to maintain a good user experience despite the privacy measures browsers like Safari have implemented to block third-party cookies, among other benefits. |
active-directory | Msal Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-migration.md | If any of your applications use the Azure Active Directory Authentication Librar If you've developed apps against Azure Active Directory (v1.0) endpoint in the past, you're likely using ADAL. Since Microsoft identity platform (v2.0) endpoint has changed significantly, the new library (MSAL) was entirely built for the new endpoint. -The following diagram shows the v2.0 vs v1.0 endpoint experience at a high level, including the app registration experience, SDKs, endpoints, and supported identities. --![Diagram that shows the v1.0 versus the v2.0 architecture.](../azuread-dev/media/about-microsoft-identity-platform/about-microsoft-identity-platform.svg) --MSAL leverages all the [benefits of Microsoft identity platform (v2.0) endpoint](../azuread-dev/azure-ad-endpoint-comparison.md). - MSAL is designed to enable a secure solution without developers having to worry about the implementation details. It simplifies and manages acquiring, managing, caching, and refreshing tokens, and uses best practices for resilience. We recommend you use MSAL to [increase the resilience of authentication and authorization in client applications that you develop](../architecture/resilience-client-app.md?tabs=csharp#use-the-microsoft-authentication-library-msal). MSAL provides multiple benefits over ADAL, including the following features: |
active-directory | Quickstart Single Page App React Sign In | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-single-page-app-react-sign-in.md | Run the project with a web server by using Node.js: - [Quickstart: Protect an ASP.NET Core web API with the Microsoft identity platform](./quickstart-web-api-aspnet-core-protect-api.md) -- Learn more by building this React SPA from scratch with the following series - [Tutorial: Sign in users and call Microsoft Graph](./single-page-app-tutorial-01-register-app.md)+- Learn more by building this React SPA from scratch with the following series - [Tutorial: Sign in users and call Microsoft Graph](./tutorial-single-page-app-react-register-app.md) |
active-directory | Quickstart V2 Javascript Auth Code React | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-javascript-auth-code-react.md | -> > [Tutorial: Sign in users and call Microsoft Graph](./single-page-app-tutorial-01-register-app.md) +> > [Tutorial: Sign in users and call Microsoft Graph](./tutorial-single-page-app-react-register-app.md) |
active-directory | Saml Protocol Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/saml-protocol-reference.md | Microsoft Entra ID exposes tenant-specific and common (tenant-independent) SSO a ## Next steps -For information about the federation metadata documents that Microsoft Entra ID publishes, see [Federation Metadata](../azuread-dev/azure-ad-federation-metadata.md). +For information about the federation metadata documents that Microsoft Entra ID publishes, see [Federation Metadata](federation-metadata.md). |
active-directory | Scenario Daemon App Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-daemon-app-configuration.md | import com.microsoft.aad.msal4j.SilentParameters; # [Node.js](#tab/nodejs) -Install the packages by running `npm install` in the folder where *package.json* file resides. Then, import **msal-node** package: +Install the packages by running `npm install` in the folder where the `package.json` file resides. Then, import the `msal-node` package: ```JavaScript const msal = require('@azure/msal-node'); ``` |
active-directory | Signing Key Rollover | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/signing-key-rollover.md | -The Microsoft identity platform uses public-key cryptography built on industry standards to establish trust between itself and the applications that use it. In practical terms, this works in the following way: The Microsoft identity platform uses a signing key that consists of a public and private key pair. When a user signs in to an application that uses the Microsoft identity platform for authentication, the Microsoft identity platform creates a security token that contains information about the user. This token is signed by the Microsoft identity platform using its private key before it's sent back to the application. To verify that the token is valid and originated from Microsoft identity platform, the application must validate the token's signature using the public keys exposed by the Microsoft identity platform that is contained in the tenant's [OpenID Connect discovery document](https://openid.net/specs/openid-connect-discovery-1_0.html) or SAML/WS-Fed [federation metadata document](../azuread-dev/azure-ad-federation-metadata.md). +The Microsoft identity platform uses public-key cryptography built on industry standards to establish trust between itself and the applications that use it. In practical terms, this works in the following way: The Microsoft identity platform uses a signing key that consists of a public and private key pair. When a user signs in to an application that uses the Microsoft identity platform for authentication, the Microsoft identity platform creates a security token that contains information about the user. This token is signed by the Microsoft identity platform using its private key before it's sent back to the application. 
To verify that the token is valid and originated from Microsoft identity platform, the application must validate the token's signature using the public keys exposed by the Microsoft identity platform that are contained in the tenant's [OpenID Connect discovery document](https://openid.net/specs/openid-connect-discovery-1_0.html) or SAML/WS-Fed [federation metadata document](federation-metadata.md). For security purposes, the Microsoft identity platform's signing key rolls on a periodic basis and, in the case of an emergency, could be rolled over immediately. There's no set or guaranteed time between these key rolls - any application that integrates with the Microsoft identity platform should be prepared to handle a key rollover event no matter how frequently it may occur. If your application doesn't handle sudden refreshes, and attempts to use an expired key to verify the signature on a token, your application will incorrectly reject the token. Checking every 24 hours for updates is a best practice, with throttled (once every five minutes at most) immediate refreshes of the key document if a token is encountered that doesn't validate with the keys in your application's cache. app.UseJwtBearerAuthentication( }); ``` -### <a name="passport"></a>Web applications / APIs protecting resources using Node.js passport-azure-ad module +### <a name="passport"></a>Web applications / APIs protecting resources using Node.js `passport-azure-ad` module + If your application is using the Node.js passport-azure-ad module, it already has the necessary logic to handle key rollover automatically. You can confirm that your application uses passport-azure-ad by searching for the following snippet in your application's app.js If the key is being stored somewhere or hardcoded in your application, you can m You can validate whether your application supports automatic key rollover by using the following PowerShell scripts. 
-To check and update signing keys with PowerShell, you'll need the [MSIdentityTools](https://www.powershellgallery.com/packages/MSIdentityTools) PowerShell Module. +To check and update signing keys with PowerShell, you'll need the [MSIdentityTools](https://www.powershellgallery.com/packages/MSIdentityTools) PowerShell module. -1. Install the [MSIdentityTools](https://www.powershellgallery.com/packages/MSIdentityTools) PowerShell Module: +1. Install the [MSIdentityTools](https://www.powershellgallery.com/packages/MSIdentityTools) PowerShell module: ```powershell Install-Module -Name MSIdentityTools To check and update signing keys with PowerShell, you'll need the [MSIdentityToo ## How to perform a manual rollover if your application does not support automatic rollover If your application doesn't support automatic rollover, you need to establish a process that periodically monitors Microsoft identity platform's signing keys and performs a manual rollover accordingly. -To check and update signing keys with PowerShell, you'll need the [MSIdentityTools](https://www.powershellgallery.com/packages/MSIdentityTools) PowerShell Module. +To check and update signing keys with PowerShell, you'll need the [`MSIdentityTools`](https://www.powershellgallery.com/packages/MSIdentityTools) PowerShell module. -1. Install the [MSIdentityTools](https://www.powershellgallery.com/packages/MSIdentityTools) PowerShell Module: +1. Install the [`MSIdentityTools`](https://www.powershellgallery.com/packages/MSIdentityTools) PowerShell module: ```powershell Install-Module -Name MSIdentityTools |
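The refresh policy the signing-key-rollover article describes (check for updates every 24 hours, with throttled once-per-five-minutes immediate refreshes when a token's key isn't in the cache) can be sketched in plain JavaScript. This is an illustrative cache only; the `SigningKeyCache` class, the `fetchKeys` callback, and the injectable clock are hypothetical stand-ins for your own JWKS download logic, not part of MSAL or MSIdentityTools.

```javascript
// Illustrative signing-key cache: refresh the key document daily, and
// refresh immediately (at most once per five minutes) when a token
// arrives with an unknown key ID. All names here are hypothetical.
class SigningKeyCache {
  constructor(fetchKeys, { maxAgeMs = 24 * 60 * 60 * 1000, throttleMs = 5 * 60 * 1000, now = Date.now } = {}) {
    this.fetchKeys = fetchKeys;   // async () => [{ kid, ...keyMaterial }]
    this.maxAgeMs = maxAgeMs;     // scheduled refresh interval (24 h)
    this.throttleMs = throttleMs; // minimum gap between on-demand refreshes
    this.now = now;               // injectable clock, handy for testing
    this.keys = new Map();
    this.lastFetch = -Infinity;
  }

  async refresh() {
    const jwks = await this.fetchKeys();
    this.keys = new Map(jwks.map((k) => [k.kid, k]));
    this.lastFetch = this.now();
  }

  // Resolve the key for a token's `kid`; a null result means the token
  // should be rejected even after a (possibly throttled) refresh.
  async getKey(kid) {
    if (this.now() - this.lastFetch > this.maxAgeMs) {
      await this.refresh(); // scheduled daily refresh
    }
    if (!this.keys.has(kid) && this.now() - this.lastFetch > this.throttleMs) {
      await this.refresh(); // unknown kid: throttled immediate refresh
    }
    return this.keys.get(kid) ?? null;
  }
}
```

In a real application, `fetchKeys` would download the JWKS from the tenant's discovery document; the throttle keeps a flood of tokens signed with an unrecognized key from hammering the metadata endpoint, while a genuine rollover is picked up on the first refresh after the throttle window.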
active-directory | Single Page App Tutorial 04 Call Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/single-page-app-tutorial-04-call-api.md | - Title: "Tutorial: Call an API from a React single-page app" -description: Call an API from a React single-page app. ------- Previously updated : 11/28/2022-#Customer intent: As a React developer, I want to know how to create a user interface and access the Microsoft Graph API ---# Tutorial: Call an API from a React single-page app --Before being able to interact with the single-page app (SPA), we need to initiate an API call to Microsoft Graph and create the user interface (UI) for the application. After this is added, we can sign in to the application and get profile data information from the Microsoft Graph API. --In this tutorial, you learn how to: --> [!div class="checklist"] -> * Create the API call to Microsoft Graph -> * Create a UI for the application -> * Import and use components in the application -> * Create a component that renders the user's profile information -> * Call the API from the application --## Prerequisites --* Completion of the prerequisites and steps in [Tutorial: Create components for sign in and sign out in a React single-page app](single-page-app-tutorial-03-sign-in-users.md). --## Creating a helper the Microsoft Graph client --To allow the SPA to request access to Microsoft Graph, a reference to the `graphConfig` object needs to be added. This contains the Graph REST API endpoint defined in *authConfig.js* file. --### [Visual Studio](#tab/visual-studio) --1. Right click on the *src* folder, select **Add** > **New Item**. Create a new file called *graph.js* and select **Add**. -1. Replace the contents of the file with the following code snippet to request access to Microsoft Graph; -- :::code language="javascript" source="~/ms-identity-docs-code-javascript/react-spa/src/graph.js" ::: --### [Visual Studio Code](#tab/visual-studio-code) --1. 
In the *src* folder, create a new file called *graph.js*. -1. Add the following code snippet to request access to Microsoft Graph; -- :::code language="javascript" source="~/ms-identity-docs-code-javascript/react-spa/src/graph.js" ::: ----## Change filename and add required imports --By default, the application runs via a JavaScript file called *App.js*. It needs to be changed to *App.jsx* file, which is an extension that allows a developer to write HTML in React. --1. Rename *App.js* to *App.jsx*. -1. Replace the existing imports with the following snippet; -- ```javascript - import React, { useState } from 'react'; -- import { PageLayout } from './components/PageLayout'; - import { loginRequest } from './authConfig'; - import { callMsGraph } from './graph'; - import { ProfileData } from './components/ProfileData'; -- import { AuthenticatedTemplate, UnauthenticatedTemplate, useMsal } from '@azure/msal-react'; -- import './App.css'; -- import Button from 'react-bootstrap/Button'; - ``` --### Adding the `ProfileContent` function --The `ProfileContent` function is used to render the user's profile information. In the *App.jsx* file, add the following code below your imports: --```javascript --/** -* Renders information about the signed-in user or a button to retrieve data about the user -*/ -const ProfileContent = () => { - const { instance, accounts } = useMsal(); - const [graphData, setGraphData] = useState(null); - - function RequestProfileData() { - // Silently acquires an access token which is then attached to a request for MS Graph data - instance - .acquireTokenSilent({ - ...loginRequest, - account: accounts[0], - }) - .then((response) => { - callMsGraph(response.accessToken).then((response) => setGraphData(response)); - }); - } - - return ( - <> - <h5 className="card-title">Welcome {accounts[0].name}</h5> - <br/> - {graphData ? 
( - <ProfileData graphData={graphData} /> - ) : ( - <Button variant="secondary" onClick={RequestProfileData}> - Request Profile Information - </Button> - )} - </> - ); -}; -``` --### Replacing the default function to render authenticated information --The following code will render based on whether the user is authenticated or not. Replace the default function `App()` to render authenticated information with the following code: --```javascript -/** -* If a user is authenticated the ProfileContent component above is rendered. Otherwise a message indicating a user is not authenticated is rendered. -*/ -const MainContent = () => { - return ( - <div className="App"> - <AuthenticatedTemplate> - <ProfileContent /> - </AuthenticatedTemplate> - - <UnauthenticatedTemplate> - <h5> - <center> - Please sign-in to see your profile information. - </center> - </h5> - </UnauthenticatedTemplate> - </div> - ); -}; - -export default function App() { - return ( - <PageLayout> - <center> - <MainContent /> - </center> - </PageLayout> - ); -} -``` --## Calling the API from the application --All the required code snippets have been added, so the application can now be called and tested in a web browser. --1. Navigate to the browser previously opened in [Tutorial: Prepare an application for authentication](./single-page-app-tutorial-02-prepare-spa.md). If your browser is closed, open a new window with the address `http://localhost:3000/`. --1. Select the **Sign In** button. For the purposes of this tutorial, choose the **Sign in using Popup** option. -- :::image type="content" source="./media/single-page-app-tutorial-04-call-api/sign-in-window.png" alt-text="Screenshot of React App sign-in window."::: --1. After the popup window appears with the sign-in options, select the account with which to sign-in. -- :::image type="content" source="./media/single-page-app-tutorial-04-call-api/pick-account.png" alt-text="Screenshot requesting user to choose Microsoft account to sign into."::: --1. 
A second window may appear indicating that a code will be sent to your email address. If this happens, select **Send code**. Open the email from the sender **Microsoft account team**, and enter the 7-digit single-use code. Once entered, select **Sign in**. -- :::image type="content" source="./media/single-page-app-tutorial-04-call-api/enter-code.png" alt-text="Screenshot prompting user to enter verification code to sign-in."::: --1. For **Stay signed in**, you can select either **No** or **Yes**. -- :::image type="content" source="./media/single-page-app-tutorial-04-call-api/stay-signed-in.png" alt-text="Screenshot prompting user to decide whether to stay signed in or not."::: --1. The app will now ask for permission to sign-in and access data. Select **Accept** to continue. -- :::image type="content" source="./media/single-page-app-tutorial-04-call-api/permissions-requested.png" alt-text="Screenshot prompting user to allow the application to access permissions."::: --1. The SPA will now display a button saying **Request Profile Information**. Select it to display the Microsoft Graph profile data acquired from the Microsoft Graph API. -- :::image type="content" source="./media/single-page-app-tutorial-04-call-api/display-api-call-results.png" alt-text="Screenshot of React App depicting the results of the API call."::: --## Next steps --Learn how to use the Microsoft identity platform by trying out the following tutorial series on how to build a web API. --> [!div class="nextstepaction"] -> [Tutorial: Register a web API with the Microsoft identity platform](web-api-tutorial-01-register-app.md) |
active-directory | Spa Quickstart Portal Javascript Auth Code React | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/spa-quickstart-portal-javascript-auth-code-react.md | -> > [Tutorial: Sign in users and call Microsoft Graph from a React single-page app](./single-page-app-tutorial-01-register-app.md) +> > [Tutorial: Sign in users and call Microsoft Graph from a React single-page app](./tutorial-single-page-app-react-register-app.md) |
active-directory | Tutorial Single Page App React Call Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-single-page-app-react-call-api.md | + + Title: "Tutorial: Call an API from a React single-page app" +description: Call an API from a React single-page app. +++++++ Last updated : 09/25/2023+#Customer intent: As a React developer, I want to know how to create a user interface and access the Microsoft Graph API +++# Tutorial: Call an API from a React single-page app ++Before being able to interact with the single-page app (SPA), we need to initiate an API call to Microsoft Graph and create the user interface (UI) for the application. After this is added, we can sign in to the application and get profile data information from the Microsoft Graph API. ++In this tutorial: ++> [!div class="checklist"] +> * Create the API call to Microsoft Graph +> * Create a UI for the application +> * Import and use components in the application +> * Create a component that renders the user's profile information +> * Call the API from the application ++## Prerequisites ++* Completion of the prerequisites and steps in [Tutorial: Create components for sign in and sign out in a React single-page app](tutorial-single-page-app-react-sign-in-users.md). ++## Create the API call to Microsoft Graph ++To allow the SPA to request access to Microsoft Graph, a reference to the `graphConfig` object needs to be added. This contains the Graph REST API endpoint defined in *authConfig.js* file. ++- In the *src* folder, open *graph.js* and replace the contents of the file with the following code snippet to request access to Microsoft Graph. ++ :::code language="javascript" source="~/ms-identity-docs-code-javascript/react-spa/src/graph.js" ::: ++## Update imports to use components in the application ++The following code snippet imports the UI components that were created previously to the application. 
It also imports the required components from the `@azure/msal-react` package. These components will be used to render the user interface and call the API. ++- In the *src* folder, open *App.jsx* and replace the contents of the file with the following code snippet to request access. ++ ```javascript + import React, { useState } from 'react'; + + import { PageLayout } from './components/PageLayout'; + import { loginRequest } from './authConfig'; + import { callMsGraph } from './graph'; + import { ProfileData } from './components/ProfileData'; + + import { AuthenticatedTemplate, UnauthenticatedTemplate, useMsal } from '@azure/msal-react'; + + import './App.css'; + + import Button from 'react-bootstrap/Button'; + ``` ++### Add the `ProfileContent` function ++The `ProfileContent` function renders the signed-in user's profile information, or a **Request Profile Information** button that retrieves it when selected. ++- In the *App.jsx* file, add the following code below your imports: ++ ```JavaScript + /** + * Renders information about the signed-in user or a button to retrieve data about the user + */ + const ProfileContent = () => { + const { instance, accounts } = useMsal(); + const [graphData, setGraphData] = useState(null); + + function RequestProfileData() { + // Silently acquires an access token which is then attached to a request for MS Graph data + instance + .acquireTokenSilent({ + ...loginRequest, + account: accounts[0], + }) + .then((response) => { + callMsGraph(response.accessToken).then((response) => setGraphData(response)); + }); + } + + return ( + <> + <h5 className="card-title">Welcome {accounts[0].name}</h5> + <br/> + {graphData ? 
( + <ProfileData graphData={graphData} /> + ) : ( + <Button variant="secondary" onClick={RequestProfileData}> + Request Profile Information + </Button> + )} + </> + ); + }; + ``` ++### Add the `MainContent` function ++The `MainContent` function renders the `ProfileContent` component if a user is signed in; otherwise, it renders a message asking the user to sign in. ++- In the *App.jsx* file, replace the `App()` function with the following code: ++ ```JavaScript + /** + * If a user is authenticated the ProfileContent component above is rendered. Otherwise a message indicating a user is not authenticated is rendered. + */ + const MainContent = () => { + return ( + <div className="App"> + <AuthenticatedTemplate> + <ProfileContent /> + </AuthenticatedTemplate> + + <UnauthenticatedTemplate> + <h5> + <center> + Please sign-in to see your profile information. + </center> + </h5> + </UnauthenticatedTemplate> + </div> + ); + }; + + export default function App() { + return ( + <PageLayout> + <center> + <MainContent /> + </center> + </PageLayout> + ); + } + ``` ++## Call the Microsoft Graph API from the application ++All the required code snippets have been added, so the application can now be called and tested in a web browser. ++1. Navigate to the browser previously opened in [Tutorial: Prepare an application for authentication](./tutorial-single-page-app-react-prepare-spa.md). If your browser is closed, open a new window with the address `http://localhost:3000/`. ++1. Select the **Sign In** button. For the purposes of this tutorial, choose the **Sign in using Popup** option. ++ :::image type="content" source="./media/single-page-app-tutorial-04-call-api/sign-in-window.png" alt-text="Screenshot of React App sign-in window."::: ++1. After the popup window appears with the sign-in options, select the account with which to sign-in. 
++ :::image type="content" source="./media/single-page-app-tutorial-04-call-api/pick-account.png" alt-text="Screenshot requesting user to choose Microsoft account to sign into."::: ++1. A second window may appear indicating that a code will be sent to your email address. If this happens, select **Send code**. Open the email from the sender **Microsoft account team**, and enter the 7-digit single-use code. Once entered, select **Sign in**. ++ :::image type="content" source="./media/single-page-app-tutorial-04-call-api/enter-code.png" alt-text="Screenshot prompting user to enter verification code to sign-in."::: ++1. For **Stay signed in**, you can select either **No** or **Yes**. ++ :::image type="content" source="./media/single-page-app-tutorial-04-call-api/stay-signed-in.png" alt-text="Screenshot prompting user to decide whether to stay signed in or not."::: ++1. The app will now ask for permission to sign-in and access data. Select **Accept** to continue. ++ :::image type="content" source="./media/single-page-app-tutorial-04-call-api/permissions-requested.png" alt-text="Screenshot prompting user to allow the application to access permissions."::: ++1. The SPA will now display a button saying **Request Profile Information**. Select it to display the Microsoft Graph profile data acquired from the Microsoft Graph API. ++ :::image type="content" source="./media/single-page-app-tutorial-04-call-api/display-api-call-results.png" alt-text="Screenshot of React App depicting the results of the API call."::: ++## Next steps ++Learn how to use the Microsoft identity platform by trying out the following tutorial series on how to build a web API. ++> [!div class="nextstepaction"] +> [Tutorial: Register a web API with the Microsoft identity platform](web-api-tutorial-01-register-app.md) |
active-directory | Tutorial Single Page App React Prepare Spa | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-single-page-app-react-prepare-spa.md | + + Title: "Tutorial: Prepare an application for authentication" +description: Register a tenant application and configure it for a React SPA. ++++++++ Last updated : 09/25/2023+#Customer intent: As a React developer, I want to know how to create a new React project in an IDE and add authentication. +++# Tutorial: Prepare a Single-page application for authentication ++After registration is complete, a React project can be created using an integrated development environment (IDE). This tutorial demonstrates how to create a single-page React application using `npm` and create files needed for authentication and authorization. ++In this tutorial: ++> [!div class="checklist"] +> * Create a new React project +> * Configure the settings for the application +> * Install identity and bootstrap packages +> * Add authentication code to the application ++## Prerequisites ++* Completion of the prerequisites and steps in [Tutorial: Register an application](tutorial-single-page-app-react-register-app.md). +* Although any IDE that supports React applications can be used, the following Visual Studio IDEs are used for this tutorial. They can be downloaded from the [Downloads](https://visualstudio.microsoft.com/downloads) page. For macOS users, it's recommended to use Visual Studio Code. + - Visual Studio 2022 + - Visual Studio Code +* [Node.js](https://nodejs.org/en/download/). ++## Create a new React project ++Use the following tabs to create a React project within the IDE. ++### [Visual Studio](#tab/visual-studio) ++1. Open Visual Studio, and then select **Create a new project**. +1. Search for and choose the **Standalone JavaScript React Project** template, and then select **Next**. +1. Enter a name for the project, such as *reactspalocal*. +1. 
Choose a location for the project or accept the default option, and then select **Next**. +1. In **Additional information**, select **Create**. +1. From the toolbar, select **Start Without Debugging** to launch the application. A web browser will open with the address `http://localhost:3000/` by default. The browser remains open and re-renders for every saved change. +1. Create additional folders and files to achieve the following folder structure: ++ ```console + ├─── public + │ └─── index.html + └───src + ├─── components + │ └─── PageLayout.jsx + │ └─── ProfileData.jsx + │ └─── SignInButton.jsx + │ └─── SignOutButton.jsx + └── App.css + └── App.jsx + └── authConfig.js + └── graph.js + └── index.css + └── index.js + ``` +++### [Visual Studio Code](#tab/visual-studio-code) ++1. Open Visual Studio Code, select **File** > **Open Folder...**. Navigate to and select the location in which to create your project. +1. Open a new terminal by selecting **Terminal** > **New Terminal**. +1. Run the following commands to create a new React project with the name *reactspalocal*, change to the new directory and start the React project. A web browser will open with the address `http://localhost:3000/` by default. The browser remains open and re-renders for every saved change. ++ ```powershell + npx create-react-app reactspalocal + cd reactspalocal + npm start + ``` ++1. 
Create additional folders and files to achieve the following folder structure: ++ ```console + ├─── public + │ └─── index.html + └───src + ├─── components + │ └─── PageLayout.jsx + │ └─── ProfileData.jsx + │ └─── SignInButton.jsx + │ └─── SignOutButton.jsx + └── App.css + └── App.jsx + └── authConfig.js + └── graph.js + └── index.css + └── index.js + ``` +++## Install identity and bootstrap packages ++Identity related **npm** packages must be installed in the project to enable user authentication. For project styling, **Bootstrap** will be used. ++### [Visual Studio](#tab/visual-studio) ++1. In the **Solution Explorer**, right-click the **npm** option and select **Install new npm packages**. +1. Search for **@azure/msal-browser**, then select **Install Package**. Repeat for **@azure/msal-react** and **@azure/msal-common**. +1. Search for and install **react-bootstrap**. +1. Select **Close**. ++### [Visual Studio Code](#tab/visual-studio-code) ++1. In the **Terminal** bar, select the **+** icon to create a new terminal. A separate terminal window will open with the previous node terminal continuing to run in the background. +1. Ensure that the correct directory is selected (*reactspalocal*) then enter the following into the terminal to install the relevant `msal` and `bootstrap` packages. ++ ```powershell + npm install @azure/msal-browser @azure/msal-react @azure/msal-common + npm install react-bootstrap bootstrap + ``` +++To learn more about these packages refer to the documentation in [msal-browser](/javascript/api/@azure/msal-browser), [msal-common](/javascript/api/@azure/msal-common), [msal-react](/javascript/api/@azure/msal-react). ++## Creating the authentication configuration file ++1. 
In the *src* folder, open *authConfig.js* and add the following code snippet: ++ :::code language="javascript" source="~/ms-identity-docs-code-javascript/react-spa/src/authConfig.js" ::: ++1. Replace the following values with the values from the Microsoft Entra admin center. + - `clientId` - The identifier of the application, also referred to as the client. Replace `Enter_the_Application_Id_Here` with the **Application (client) ID** value that was recorded earlier from the overview page of the registered application. + - `authority` - This is composed of two parts: + - The *Instance* is the endpoint of the cloud provider. Check the different available endpoints in [National clouds](authentication-national-cloud.md#azure-ad-authentication-endpoints). + - The *Tenant ID* is the identifier of the tenant where the application is registered. Replace `Enter_the_Tenant_Info_Here` with the **Directory (tenant) ID** value that was recorded earlier from the overview page of the registered application. ++1. Save the file. ++## Modify *index.js* to include the authentication provider ++All parts of the app that require authentication must be wrapped in the [`MsalProvider`](/javascript/api/@azure/msal-react/#@azure-msal-react-msalprovider) component. You instantiate a [PublicClientApplication](/javascript/api/@azure/msal-browser/publicclientapplication) and then pass it to `MsalProvider`. ++1. In the *src* folder, open *index.js* and replace the contents of the file with the following code snippet to use the `msal` packages and bootstrap styling: ++ :::code language="javascript" source="~/ms-identity-docs-code-javascript/react-spa/src/index.js" ::: ++1. Save the file. ++## Next steps ++> [!div class="nextstepaction"] +> [Tutorial: Create components for sign in and sign out in a React single-page app](tutorial-single-page-app-react-sign-in-users.md) |
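The `authority` value described in the configuration step above is the *Instance* and *Tenant ID* joined together. A minimal sketch of that composition, using the tutorial's own placeholder strings (the placeholders are stand-ins to replace with your recorded values, not real identifiers, and the exact shape of the repository's *authConfig.js* may differ):

```javascript
// Sketch of how the authority is composed in authConfig.js.
const instance = "https://login.microsoftonline.com/"; // cloud provider endpoint
const tenantId = "Enter_the_Tenant_Info_Here";         // Directory (tenant) ID

const msalConfig = {
  auth: {
    clientId: "Enter_the_Application_Id_Here",   // Application (client) ID
    authority: instance + tenantId,              // Instance + Tenant ID
    redirectUri: "http://localhost:3000",        // the SPA redirect URI registered earlier
  },
};
```

For a national-cloud deployment, only `instance` changes; the tenant and client IDs come from the same overview page of the registered application.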
active-directory | Tutorial Single Page App React Register App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-single-page-app-react-register-app.md | + + Title: "Tutorial: Register a Single-page application with the Microsoft identity platform" +description: Register an application in a Microsoft Entra tenant. ++++++++ Last updated : 02/27/2023+#Customer intent: As a React developer, I want to know how to register my application with the Microsoft identity platform so that the security token service can issue access tokens to client applications that request them. +++# Tutorial: Register a Single-page application with the Microsoft identity platform ++To interact with the Microsoft identity platform, Microsoft Entra ID must be made aware of the application you create. This tutorial shows you how to register a single-page application (SPA) in a tenant on the Microsoft Entra admin center. ++In this tutorial: ++> [!div class="checklist"] +> * Register the application in a tenant +> * Add a Redirect URI to the application +> * Record the application's unique identifiers ++## Prerequisites ++* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/). +* This Azure account must have permissions to manage applications. Any of the following Microsoft Entra roles include the required permissions: + * Application administrator + * Application developer + * Cloud application administrator ++## Register the application and record identifiers +++To complete registration, provide the application a name, specify the supported account types, and add a redirect URI. Once registered, the application **Overview** pane displays the identifiers needed in the application source code. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). +1. 
If access to multiple tenants is available, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which to register the application. +1. Browse to **Identity** > **Applications** > **App registrations**, select **New registration**. +1. Enter a **Name** for the application, such as *NewSPA1*. +1. For **Supported account types**, select **Accounts in this organizational directory only**. For information on different account types, select the **Help me choose** option. +1. Under **Redirect URI (optional)**, use the drop-down menu to select **Single-page-application (SPA)** and enter `http://localhost:3000` into the text box. +1. Select **Register**. ++ :::image type="content" source="./media/single-page-app-tutorial-01-register-app/register-application.png" alt-text="Screenshot that shows how to enter a name and select the account type in the Azure portal."::: ++1. The application's **Overview** pane is displayed when registration is complete. Record the **Directory (tenant) ID** and the **Application (client) ID** to be used in your application source code. ++ :::image type="content" source="./media/single-page-app-tutorial-01-register-app/record-identifiers.png" alt-text="Screenshot that shows the identifier values on the overview page on the Azure portal."::: ++ >[!NOTE] + > The **Supported account types** can be changed by referring to [Modify the accounts supported by an application](howto-modify-supported-accounts.md). ++## Next steps ++> [!div class="nextstepaction"] +> [Tutorial: Prepare an application for authentication](tutorial-single-page-app-react-prepare-spa.md) |
active-directory | Tutorial Single Page App React Sign In Users | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-single-page-app-react-sign-in-users.md | + + Title: "Tutorial: Create components for sign in and sign out in a React single-page app" +description: Add sign in and sign out components to your React single-page app ++++++++ Last updated : 09/26/2023+#Customer intent: As a React developer, I want to know how to use functional components to add sign in and sign out experiences in my React application. +++# Tutorial: Create components for sign in and sign out in a React single-page app ++Functional components are the building blocks of React apps. This tutorial demonstrates how functional components can be used to build the sign in and sign out experience in a React single-page app (SPA). The `useMsal` hook is used to retrieve an access token to allow user sign-in. ++In this tutorial: ++> [!div class="checklist"] +> +> - Add components to the application +> - Create a way of displaying the user's profile information +> - Create a layout that displays the sign in and sign out experience +> - Add the sign in and sign out experiences ++## Prerequisites ++* Completion of the prerequisites and steps in [Tutorial: Prepare an application for authentication](tutorial-single-page-app-react-prepare-spa.md). ++### Add the page layout component ++1. Open *PageLayout.jsx* and add the following code to render the page layout. The [useIsAuthenticated](/javascript/api/@azure/msal-react) hook returns whether or not a user is currently signed in. ++ ```javascript + /* + * Copyright (c) Microsoft Corporation. All rights reserved. + * Licensed under the MIT License.
+ */ ++ import React from "react"; + import Navbar from "react-bootstrap/Navbar"; ++ import { useIsAuthenticated } from "@azure/msal-react"; + import { SignInButton } from "./SignInButton"; + import { SignOutButton } from "./SignOutButton"; ++ /** + * Renders the navbar component with a sign in or sign out button depending on whether or not a user is authenticated + * @param props + */ + export const PageLayout = (props) => { + const isAuthenticated = useIsAuthenticated(); ++ return ( + <> + <Navbar bg="primary" variant="dark" className="navbarStyle"> + <a className="navbar-brand" href="/"> + Microsoft Identity Platform + </a> + <div className="collapse navbar-collapse justify-content-end"> + {isAuthenticated ? <SignOutButton /> : <SignInButton />} + </div> + </Navbar> + <br /> + <br /> + <h5> + <center> + Welcome to the Microsoft Authentication Library For Javascript - + React SPA Tutorial + </center> + </h5> + <br /> + <br /> + {props.children} + </> + ); + }; + ``` ++1. Save the file. ++### Display profile information ++1. Open *ProfileData.jsx* and add the following code, which creates a component that displays the user's profile information: ++ ```javascript + import React from "react"; + /** + * Renders information about the user obtained from MS Graph + * @param props + */ + export const ProfileData = (props) => { + return ( + <div id="profile-div"> + <p> + <strong>First Name: </strong> {props.graphData.givenName} + </p> + <p> + <strong>Last Name: </strong> {props.graphData.surname} + </p> + <p> + <strong>Email: </strong> {props.graphData.userPrincipalName} + </p> + <p> + <strong>Id: </strong> {props.graphData.id} + </p> + </div> + ); + }; + ``` ++1. Save the file. ++### Add the sign in experience ++1. Open *SignInButton.jsx* and add the following code, which creates a button that signs in the user using either a pop-up or redirect.
++ ```javascript + import React from "react"; + import { useMsal } from "@azure/msal-react"; + import { loginRequest } from "../authConfig"; + import DropdownButton from "react-bootstrap/DropdownButton"; + import Dropdown from "react-bootstrap/Dropdown"; ++ /** + * Renders a drop down button with child buttons for logging in with a popup or redirect + * Note the [useMsal] hook + */ ++ export const SignInButton = () => { + const { instance } = useMsal(); ++ const handleLogin = (loginType) => { + if (loginType === "popup") { + instance.loginPopup(loginRequest).catch((e) => { + console.log(e); + }); + } else if (loginType === "redirect") { + instance.loginRedirect(loginRequest).catch((e) => { + console.log(e); + }); + } + }; + return ( + <DropdownButton + variant="secondary" + className="ml-auto" + drop="start" + title="Sign In" + > + <Dropdown.Item as="button" onClick={() => handleLogin("popup")}> + Sign in using Popup + </Dropdown.Item> + <Dropdown.Item as="button" onClick={() => handleLogin("redirect")}> + Sign in using Redirect + </Dropdown.Item> + </DropdownButton> + ); + }; + ``` ++1. Save the file. ++### Add the sign out experience ++1. Open *SignOutButton.jsx* and add the following code, which creates a button that signs out the user using either a pop-up or redirect.
++ ```javascript + import React from "react"; + import { useMsal } from "@azure/msal-react"; + import DropdownButton from "react-bootstrap/DropdownButton"; + import Dropdown from "react-bootstrap/Dropdown"; ++ /** + * Renders a sign out button + */ + export const SignOutButton = () => { + const { instance } = useMsal(); ++ const handleLogout = (logoutType) => { + if (logoutType === "popup") { + instance.logoutPopup({ + postLogoutRedirectUri: "/", + mainWindowRedirectUri: "/", + }); + } else if (logoutType === "redirect") { + instance.logoutRedirect({ + postLogoutRedirectUri: "/", + }); + } + }; ++ return ( + <DropdownButton + variant="secondary" + className="ml-auto" + drop="start" + title="Sign Out" + > + <Dropdown.Item as="button" onClick={() => handleLogout("popup")}> + Sign out using Popup + </Dropdown.Item> + <Dropdown.Item as="button" onClick={() => handleLogout("redirect")}> + Sign out using Redirect + </Dropdown.Item> + </DropdownButton> + ); + }; + ``` ++1. Save the file. ++## Next steps ++> [!div class="nextstepaction"] +> [Tutorial: Call an API from a React single-page app](tutorial-single-page-app-react-call-api.md) |
active-directory | Tutorial V2 Angular Auth Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-angular-auth-code.md | This tutorial uses the following libraries: | [MSAL Angular](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-angular) | Microsoft Authentication Library for JavaScript Angular Wrapper | | [MSAL Browser](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-browser) | Microsoft Authentication Library for JavaScript v2 browser package | -You can find the source code for all of the MSAL.js libraries in the [microsoft-authentication-library-for-js](https://github.com/AzureAD/microsoft-authentication-library-for-js) repository on GitHub. +You can find the source code for all of the MSAL.js libraries in the [`microsoft-authentication-library-for-js`](https://github.com/AzureAD/microsoft-authentication-library-for-js) repository on GitHub. ### Get the completed code sample To complete registration, provide the application a name, specify the supported 1. Open Visual Studio Code, select **File** > **Open Folder...**. Navigate to and select the location in which to create your project. 1. Open a new terminal by selecting **Terminal** > **New Terminal**. 1. You may need to switch terminal types. Select the down arrow next to the **+** icon in the terminal and select **Command Prompt**.-1. Run the following commands to create a new Angular project with the name _msal-angular-tutorial_, install Angular Material component libraries, MSAL Browser, MSAL Angular and generate home and profile components. +1. Run the following commands to create a new Angular project with the name `msal-angular-tutorial`, install Angular Material component libraries, MSAL Browser, MSAL Angular and generate home and profile components. ```cmd npm install -g @angular/cli |
active-directory | Tutorial V2 Javascript Auth Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-javascript-auth-code.md | To continue with the tutorial and build the application yourself, move on to the ## Create your project -Once you have [Node.js](https://nodejs.org/en/download/) installed, create a folder to host your application, for example *msal-spa-tutorial*. +Once you have [Node.js](https://nodejs.org/en/download/) installed, create a folder to host your application, such as `msal-spa-tutorial`. Next, implement a small [Express](https://expressjs.com/) web server to serve your *index.html* file. |
active-directory | V2 Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-overview.md | Choose your preferred [application scenario](authentication-flows-app-scenarios. For a more in-depth look at building applications using the Microsoft identity platform, see our multipart tutorial series for the following applications: -- [React Single-page app (SPA)](single-page-app-tutorial-01-register-app.md)+- [React Single-page app (SPA)](tutorial-single-page-app-react-register-app.md) - [.NET Web app](web-app-tutorial-01-register-application.md) - [.NET Web API](web-api-tutorial-01-register-app.md) |
active-directory | Concept Primary Refresh Token | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/concept-primary-refresh-token.md | The PRT is issued during user authentication on a Windows 10 or newer device in In Microsoft Entra registered device scenarios, the Microsoft Entra WAM plugin is the primary authority for the PRT since Windows logon isn't happening with this Microsoft Entra account. > [!NOTE]-> 3rd party identity providers need to support the WS-Trust protocol to enable PRT issuance on Windows 10 or newer devices. Without WS-Trust, PRT cannot be issued to users on Microsoft Entra hybrid joined or Microsoft Entra joined devices. On ADFS only usernamemixed endpoints are required. Both adfs/services/trust/2005/windowstransport and adfs/services/trust/13/windowstransport should be enabled as intranet facing endpoints only and **must NOT be exposed** as extranet facing endpoints through the Web Application Proxy. +> 3rd party identity providers need to support the WS-Trust protocol to enable PRT issuance on Windows 10 or newer devices. Without WS-Trust, PRT cannot be issued to users on Microsoft Entra hybrid joined or Microsoft Entra joined devices. On ADFS, only usernamemixed endpoints are required. On ADFS, if a smartcard/certificate is used during Windows sign-in, certificatemixed endpoints are required. Both adfs/services/trust/2005/windowstransport and adfs/services/trust/13/windowstransport should be enabled as intranet facing endpoints only and **must NOT be exposed** as extranet facing endpoints through the Web Application Proxy. > [!NOTE] > Microsoft Entra Conditional Access policies are not evaluated when PRTs are issued. |
active-directory | Enterprise State Roaming Troubleshooting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/enterprise-state-roaming-troubleshooting.md | This section gives suggestions on how to troubleshoot and diagnose problems rela Enterprise State Roaming requires the device to be registered with Microsoft Entra ID. Although not specific to Enterprise State Roaming, using the following instructions can help confirm that the Windows 10 or newer client is registered, and confirm thumbprint, Microsoft Entra settings URL, NGC status, and other information. 1. Open the command prompt unelevated. To do this in Windows, open the Run launcher (Win + R) and type “cmd” to open.-1. Once the command prompt is open, type “*dsregcmd.exe /status*”. -1. For expected output, the **AzureAdJoined** field value should be “YES”, **WamDefaultSet** field value should be “YES”, and the **WamDefaultGUID** field value should be a GUID with “(AzureAD)” at the end. +1. Once the command prompt is open, type `dsregcmd.exe /status`. +1. For expected output, the **AzureAdJoined** field value should be `YES`, **WamDefaultSet** field value should be `YES`, and the **WamDefaultGUID** field value should be a GUID with `(AzureAD)` at the end. **Potential issue**: **WamDefaultSet** and **AzureAdJoined** both have “NO” in the field value, the device was domain-joined and registered with Microsoft Entra ID, and the device doesn't sync. If it's showing this, the device may need to wait for policy to be applied or the authentication for the device failed when connecting to Microsoft Entra ID. The user may have to wait a few hours for the policy to be applied. Other troubleshooting steps may include retrying autoregistration by signing out and back in, or launching the task in Task Scheduler. 
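The expected field values described above lend themselves to a quick scripted check. The sketch below is illustrative only: the sample text stands in for a captured excerpt of `dsregcmd.exe /status` output, and the field spacing and GUID are placeholder values, not a verbatim capture.

```shell
# Illustrative stand-in for output captured with: dsregcmd.exe /status > dsreg-status.txt
cat > dsreg-status.txt <<'EOF'
             AzureAdJoined : YES
             WamDefaultSet : YES
            WamDefaultGUID : 00000000-0000-0000-0000-000000000000 (AzureAD)
EOF

# A correctly registered client reports YES for both fields,
# and a WamDefaultGUID ending in (AzureAD).
grep -Eq 'AzureAdJoined[[:space:]]*:[[:space:]]*YES' dsreg-status.txt && echo "AzureAdJoined OK"
grep -Eq 'WamDefaultSet[[:space:]]*:[[:space:]]*YES' dsreg-status.txt && echo "WamDefaultSet OK"
grep -Eq 'WamDefaultGUID[[:space:]]*:.*\(AzureAD\)$' dsreg-status.txt && echo "WamDefaultGUID OK"
```

If any of the three checks prints nothing, the device's registration state doesn't match the expected values, and the troubleshooting guidance above applies.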
In some cases, running “*dsregcmd.exe /leave*” in an elevated command prompt window, rebooting, and trying registration again may help with this issue. |
active-directory | Howto Vm Sign In Azure Ad Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-windows.md | An Azure user who has the Owner or Contributor role assigned for a VM doesn't au There are two ways to configure role assignments for a VM: -- Microsoft Entra portal experience+- Microsoft Entra admin center experience - Azure Cloud Shell experience > [!NOTE] There are two ways to configure role assignments for a VM: <a name='azure-ad-portal'></a> -### Microsoft Entra portal +<a name='microsoft-entra-portal'></a> ++### Microsoft Entra admin center To configure role assignments for your Microsoft Entra ID-enabled Windows Server 2019 Datacenter VMs: Exit code -2145648607 translates to `DSREG_AUTOJOIN_DISC_FAILED`. The extension - `curl https://pas.windows.net/ -D -` > [!NOTE]- > Replace `<TenantID>` with the Azure AD tenant ID that's associated with the Azure subscription. If you need to find the tenant ID, you can hover over your account name or select **Identity** > **Overview** > **Properties** > **Tenant ID**. + > Replace `<TenantID>` with the Microsoft Entra tenant ID that's associated with the Azure subscription. If you need to find the tenant ID, you can hover over your account name or select **Identity** > **Overview** > **Properties** > **Tenant ID**. > > Attempts to connect to `enterpriseregistration.windows.net` might return 404 Not Found, which is expected behavior. Attempts to connect to `pas.windows.net` might prompt for PIN credentials or might return 404 Not Found. (You don't need to enter the PIN.) Either one is sufficient to verify that the URL is reachable. |
active-directory | Hybrid Join Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/hybrid-join-control.md | Use the following example to create a Group Policy Object (GPO) to deploy a regi 1. Key Path: **SOFTWARE\Microsoft\Windows\CurrentVersion\CDJ\AAD**. 1. Value name: **TenantId**. 1. Value type: **REG_SZ**.- 1. Value data: The GUID or **Tenant ID** of your Microsoft Entra instance (This value can be found in the **Microsoft Entra admin center** > **Identity** > **Properties** > **Tenant ID**). + 1. Value data: The GUID or **Tenant ID** of your Microsoft Entra tenant, which can be found in **Identity** > **Overview** > **Properties** > **Tenant ID**. 1. Select **OK**. 1. Right-click on the Registry and select **New** > **Registry Item**. 1. On the **General** tab, configure the following. |
active-directory | Hybrid Join Manual | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/hybrid-join-manual.md | In your forest, the SCP object for the autoregistration of domain-joined devices `CN=62a0ff2e-97b9-4513-943f-0d221bd30080,CN=Device Registration Configuration,CN=Services,[Your Configuration Naming Context]` Depending on how you have deployed Microsoft Entra Connect, the SCP object might have already been configured.-You can verify the existence of the object and retrieve the discovery values by using the following Windows PowerShell script: +You can verify the existence of the object and retrieve the discovery values by using the following PowerShell script: - ```PowerShell + ```powershell $scp = New-Object System.DirectoryServices.DirectoryEntry; $scp.Path = "LDAP://CN=62a0ff2e-97b9-4513-943f-0d221bd30080,CN=Device Registration Configuration,CN=Services,CN=Configuration,DC=fabrikam,DC=com"; |
active-directory | Hybrid Join Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/hybrid-join-plan.md | For devices running the Windows desktop operating system, supported versions are ### Windows down-level devices -- Windows 8.1-- Windows 7 support ended on January 14, 2020. For more information, see [Support for Windows 7 has ended](https://support.microsoft.com/en-us/help/4057281/windows-7-support-ended-on-january-14-2020) - Windows Server 2012 R2 - Windows Server 2012-- Windows Server 2008 R2 for support information on Windows Server 2008 and 2008 R2, see [Prepare for Windows Server 2008 end of support](https://www.microsoft.com/cloud-platform/windows-server-2008) As a first planning step, you should review your environment and determine whether you need to support Windows down-level devices. |
active-directory | Manage Stale Devices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/manage-stale-devices.md | To clean up Microsoft Entra ID: - **Windows 7/8** - Disable or delete Windows 7/8 devices in your on-premises AD first. You can't use Microsoft Entra Connect to disable or delete Windows 7/8 devices in Microsoft Entra ID. Instead, when you make the change in your on-premises, you must disable/delete in Microsoft Entra ID. > [!NOTE]-> - Deleting devices in your on-premises AD or Microsoft Entra ID does not remove registration on the client. It will only prevent access to resources using device as an identity (e.g. Conditional Access). Read additional information on how to [remove registration on the client](faq.yml). +> - Deleting devices in your on-premises Active Directory or Microsoft Entra ID does not remove registration on the client. It will only prevent access to resources using device as an identity (such as Conditional Access). Read additional information on how to [remove registration on the client](faq.yml). > - Deleting a Windows 10 or newer device only in Microsoft Entra ID will re-synchronize the device from your on-premises using Microsoft Entra Connect but as a new object in "Pending" state. A re-registration is required on the device. > - Removing the device from sync scope for Windows 10 or newer /Server 2016 devices will delete the Microsoft Entra device. Adding it back to sync scope will place a new object in "Pending" state. A re-registration of the device is required. > - If you are not using Microsoft Entra Connect for Windows 10 or newer devices to synchronize (e.g. ONLY using AD FS for registration), you must manage lifecycle similar to Windows 7/8 devices. |
active-directory | Troubleshoot Device Dsregcmd | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/troubleshoot-device-dsregcmd.md | The state is displayed only when the device is Microsoft Entra joined or Microso - **DeviceAuthStatus**: Performs a check to determine the device's health in Microsoft Entra ID. The health statuses are: * *SUCCESS* if the device is present and enabled in Microsoft Entra ID. * *FAILED. Device is either disabled or deleted* if the device is either disabled or deleted. For more information about this issue, see [Microsoft Entra device management FAQ](faq.yml#why-do-my-users-see-an-error-message-saying--your-organization-has-deleted-the-device--or--your-organization-has-disabled-the-device--on-their-windows-10-11-devices). - * *FAILED. ERROR* if the test was unable to run. This test requires network connectivity to Microsoft Entra ID. + * *FAILED. ERROR* if the test was unable to run. This test requires network connectivity to Microsoft Entra ID under the system context. > [!NOTE] > The **DeviceAuthStatus** field was added in the Windows 10 May 2021 update (version 21H1). |
active-directory | Troubleshoot Hybrid Join Windows Current | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/troubleshoot-hybrid-join-windows-current.md | Use Event Viewer to look for the log entries that are logged by the Microsoft En > [!NOTE] > The CloudAP plug-in logs error events in the operational logs, and it logs the info events in the analytics logs. The analytics and operational log events are both required to troubleshoot issues. -1. Event 1006 in the analytics logs denotes the start of the PRT acquisition flow, and event 1007 in the analytics logs denotes the end of the PRT acquisition flow. All events in the Microsoft Entra ID logs (analytics and operational) that are logged between events 1006 and 1007 were logged as part of the PRT acquisition flow. +1. Event 1006 in the analytics logs denotes the start of the PRT acquisition flow, and event 1007 in the analytics logs denotes the end of the PRT acquisition flow. All events in the Microsoft Entra logs (analytics and operational) that are logged between events 1006 and 1007 were logged as part of the PRT acquisition flow. 1. Event 1007 logs the final error code. |
active-directory | Troubleshoot Mac Sso Extension Plugin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/troubleshoot-mac-sso-extension-plugin.md | By default, only MSAL apps invoke the SSO Extension, and then in turn the Extens |**1** |**All Items**|Shows all types of credentials across Keychain Access| |**2** |**Keychain Search Bar**|Allows filtering by credential. To filter for the Microsoft Entra PRT type **`primaryrefresh`**| |**3** |**Kind**|Refers to the type of credential. The Microsoft Entra PRT credential is an **Application Password** credential type|- |**4** |**Account**|Displays the Microsoft Entra User Account, which owns the PRT in the format: **`UserObjectId.TenantId-login.windows.net`** | + |**4** |**Account**|Displays the Microsoft Entra user account, which owns the PRT in the format: **`UserObjectId.TenantId-login.windows.net`** | |**5** |**Where**|Displays the full name of the credential. The Microsoft Entra PRT credential begins with the following format: **`primaryrefreshtoken-29d9ed98-a469-4536-ade2-f981bc1d605`** The **29d9ed98-a469-4536-ade2-f981bc1d605** is the Application ID for the **Microsoft Authentication Broker** service, responsible for handling PRT acquisition requests| |**6** |**Modified**|Shows when the credential was last updated. For the Microsoft Entra PRT credential, anytime the credential is bootstrapped or updated by an interactive sign-on event it updates the date/timestamp| |**7** |**Keychain** |Indicates which Keychain the selected credential resides. The Microsoft Entra PRT credential resides in the **Local Items** or **iCloud** Keychain. When iCloud is enabled on the macOS device, the **Local Items** Keychain will become the **iCloud** keychain| |
active-directory | Troubleshoot Primary Refresh Token | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/troubleshoot-primary-refresh-token.md | +<!-- docutune:ignore AAD --> + On devices that are joined to Microsoft Entra ID or hybrid Microsoft Entra ID, the main component of authentication is the PRT. You obtain this token by signing in to Windows 10 by using Microsoft Entra credentials on a Microsoft Entra joined device for the first time. The PRT is cached on that device. For subsequent sign-ins, the cached token is used to let you use the desktop. As part of the process of locking and unlocking the device or signing in again to Windows, a background network authentication attempt is made one time every four hours to refresh the PRT. If problems occur that prevent refreshing the token, the PRT eventually expires. Expiration affects single sign-on (SSO) to Microsoft Entra resources. It also causes sign-in prompts to be shown. -If you suspect that a PRT problem exists, we recommend that you first collect Microsoft Entra ID logs, and follow the steps that are outlined in the troubleshooting checklist. Do this for any Microsoft Entra client issue first, ideally within a repro session. Complete this process before you file a support request. +If you suspect that a PRT problem exists, we recommend that you first collect Microsoft Entra logs, and follow the steps that are outlined in the troubleshooting checklist. Do this for any Microsoft Entra client issue first, ideally within a repro session. Complete this process before you file a support request. ## Troubleshooting checklist |
active-directory | Clean Up Unmanaged Accounts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/clean-up-unmanaged-accounts.md | Some overtaken domains might not be updated. For example, a missing DNS TXT reco Use the sample application on [Azure-Samples/Remove-Unmanaged-Guests](https://github.com/Azure-Samples/Remove-Unmanaged-Guests). -## Reset redemption using MSIdentityTools PowerShell Module +## Reset redemption using `MSIdentityTools` PowerShell module -MSIdentityTools PowerShell Module is a collection of cmdlets and scripts, which you use in the Microsoft identity platform and Microsoft Entra ID. Use the cmdlets and scripts to augment PowerShell SDK capabilities. See, [microsoftgraph/msgraph-sdk-powershell](https://github.com/microsoftgraph/msgraph-sdk-powershell). +The `MSIdentityTools` PowerShell module is a collection of cmdlets and scripts, which you use in the Microsoft identity platform and Microsoft Entra ID. Use the cmdlets and scripts to augment PowerShell SDK capabilities. See, [microsoftgraph/msgraph-sdk-powershell](https://github.com/microsoftgraph/msgraph-sdk-powershell). Run the following cmdlets: |
active-directory | Domains Admin Takeover | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/domains-admin-takeover.md | The key and templates aren't moved over when the unmanaged organization is in a Although RMS for individuals is designed to support Microsoft Entra authentication to open protected content, it doesn't prevent users from also protecting content. If users did protect content with the RMS for individuals subscription, and the key and templates weren't moved over, that content isn't accessible after the domain takeover. -### Microsoft Entra ID PowerShell cmdlets for the ForceTakeover option +### Azure AD PowerShell cmdlets for the ForceTakeover option You can see these cmdlets used in [PowerShell example](#powershell-example). |
active-directory | Groups Naming Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-naming-policy.md | Some administrator roles are exempted from these policies, across all group work ## Install PowerShell cmdlets -Be sure to uninstall any older version of the Azure Active Directory PowerShell for Graph Module for Windows PowerShell and install [Azure Active Directory PowerShell for Graph - Public Preview Release 2.0.0.137](https://www.powershellgallery.com/packages/AzureADPreview/2.0.0.137) before you run the PowerShell commands. +Be sure to uninstall any older version of the Azure Active Directory PowerShell for Graph module and install [Azure Active Directory PowerShell for Graph - Public Preview Release 2.0.0.137](https://www.powershellgallery.com/packages/AzureADPreview/2.0.0.137) before you run the PowerShell commands. 1. Open the Windows PowerShell app as an administrator. 2. Uninstall any previous version of AzureADPreview. |
active-directory | Groups Self Service Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-self-service-management.md | Groups created in | Security group default behavior | Microsoft 365 group defaul 2. Select **All groups** > **Groups**, and then select **General** settings. + > [!NOTE] + > This setting only restricts access to group information in **My Groups**. It does not restrict access to group information via other methods, such as Microsoft Graph API calls or the Microsoft Entra admin center. + ![Microsoft Entra groups general settings.](./media/groups-self-service-management/groups-settings-general.png) > [!NOTE] > In June 2024, the setting **Restrict users access to My Groups** will change to **Restrict users ability to see and edit security groups in My Groups.** If the setting is currently set to ‘Yes,’ end users will be able to access My Groups in June 2024, but will not be able to see security groups. |
active-directory | Groups Settings Cmdlets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-settings-cmdlets.md | The cmdlets are part of the Azure Active Directory PowerShell V2 module. For ins ## Install PowerShell cmdlets -Be sure to uninstall any older version of the Azure Active Directory PowerShell for Graph Module for Windows PowerShell and install [Azure Active Directory PowerShell for Graph - Public Preview Release (later than 2.0.0.137)](https://www.powershellgallery.com/packages/AzureADPreview) before you run the PowerShell commands. +Be sure to uninstall any older version of the Azure Active Directory PowerShell for Graph module and install [Azure Active Directory PowerShell for Graph - Public Preview Release (later than 2.0.0.137)](https://www.powershellgallery.com/packages/AzureADPreview) before you run the PowerShell commands. 1. Open the Windows PowerShell app as an administrator. 2. Uninstall any previous version of AzureADPreview. |
active-directory | Allow Deny List | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/allow-deny-list.md | If you switch from one policy to the other, this discards the existing policy co > [!Note] > The AzureADPreview Module is not a fully supported module as it is in preview. -To set the allow or blocklist by using PowerShell, you must install the preview version of the Azure AD PowerShell Module for Windows PowerShell. Specifically, install the AzureADPreview module version 2.0.0.98 or later. +To set the allow or blocklist by using PowerShell, you must install the preview version of the Azure AD PowerShell module. Specifically, install the AzureADPreview module version 2.0.0.98 or later. To check the version of the module (and see if it's installed): 1. Open Windows PowerShell as an elevated user (Run as Administrator). -2. Run the following command to see if you have any versions of the Azure AD PowerShell Module for Windows PowerShell installed on your computer: +2. Run the following command to see if you have any versions of the Azure AD PowerShell module installed on your computer: ```powershell Get-Module -ListAvailable AzureAD* |
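The version check described in the row above can be sketched as a short PowerShell snippet (a hypothetical sketch, not the article's verbatim script — it assumes PowerShell Gallery access and an elevated session; the 2.0.0.98 minimum comes from the doc text):

```powershell
# Check for an existing AzureADPreview module and install version 2.0.0.98
# or later if it's missing or outdated (run PowerShell as Administrator).
$installed = Get-Module -ListAvailable AzureADPreview |
    Sort-Object Version -Descending |
    Select-Object -First 1

if (-not $installed -or $installed.Version -lt [Version]'2.0.0.98') {
    # Remove any older copies first, as the article recommends.
    Uninstall-Module AzureADPreview -AllVersions -ErrorAction SilentlyContinue
    Install-Module AzureADPreview -MinimumVersion '2.0.0.98'
}
```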
active-directory | Auditing And Reporting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/auditing-and-reporting.md | -You can use access reviews to periodically verify whether guest users still need access to your resources. The **Access reviews** feature is available in **Microsoft Entra ID** under **External Identities** > **Access reviews**. You can also search for "access reviews" from **All services** in the Azure portal. To learn how to use access reviews, see [Manage guest access with Microsoft Entra access reviews](../governance/manage-guest-access-with-access-reviews.md). +You can use access reviews to periodically verify whether guest users still need access to your resources. The **Access reviews** feature is available in **Microsoft Entra ID** under **Identity Governance** > **Access reviews**. To learn how to use access reviews, see [Manage guest access with Microsoft Entra access reviews](../governance/manage-guest-access-with-access-reviews.md). ## Audit logs -The Microsoft Entra audit logs provide records of system and user activities, including activities initiated by guest users. To access audit logs, in **Microsoft Entra ID**, under **Monitoring**, select **Audit logs**. To access audit logs of one specific user, select **Microsoft Entra ID** > **Users** > select the user > **Audit logs**. +The Microsoft Entra audit logs provide records of system and user activities, including activities initiated by guest users. To access audit logs, in **Identity**, under **Monitoring & health**, select **Audit logs**. To access audit logs of one specific user, select **Identity** > **Users** > **All users** > select the user > **Audit logs**. :::image type="content" source="media/auditing-and-reporting/audit-log.png" alt-text="Screenshot showing an example of audit log output." lightbox="media/auditing-and-reporting/audit-log-large.png"::: |
active-directory | Authentication Conditional Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/authentication-conditional-access.md | -# Authentication and Conditional Access for External Identities +# Authentication and Conditional Access for External ID > [!TIP] > This article applies to B2B collaboration and B2B direct connect. If your tenant is configured for customer identity and access management, see [Security and governance in Microsoft Entra ID for customers](customers/concept-security-customers.md). |
active-directory | B2b Direct Connect Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-direct-connect-overview.md | -Microsoft Entra B2B direct connect is a feature of External Identities that lets you set up a mutual trust relationship with another Microsoft Entra organization for seamless collaboration. This feature currently works with Microsoft Teams shared channels. With B2B direct connect, users from both organizations can work together using their home credentials and a shared channel in Teams, without having to be added to each other's organizations as guests. Use B2B direct connect to share resources with external Microsoft Entra organizations. Or use it to share resources across multiple Microsoft Entra tenants within your own organization. +B2B direct connect is a feature of Microsoft Entra External ID that lets you set up a mutual trust relationship with another Microsoft Entra organization for seamless collaboration. This feature currently works with Microsoft Teams shared channels. With B2B direct connect, users from both organizations can work together using their home credentials and a shared channel in Teams, without having to be added to each other's organizations as guests. Use B2B direct connect to share resources with external Microsoft Entra organizations. Or use it to share resources across multiple Microsoft Entra tenants within your own organization. ![Diagram illustrating B2B direct connect](media/b2b-direct-connect-overview/b2b-direct-connect-overview.png) |
active-directory | B2b Fundamentals | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-fundamentals.md | -This article contains recommendations and best practices for business-to-business (B2B) collaboration in Microsoft Entra ID. +This article contains recommendations and best practices for business-to-business (B2B) collaboration in Microsoft Entra External ID. > [!IMPORTANT] > The [email one-time passcode feature](one-time-passcode.md) is now turned on by default for all new tenants and for any existing tenants where you haven't explicitly turned it off. When this feature is turned off, the fallback authentication method is to prompt invitees to create a Microsoft account. This article contains recommendations and best practices for business-to-busines | Recommendation | Comments | | | | | Consult Microsoft Entra guidance for securing your collaboration with external partners | Learn how to take a holistic governance approach to your organization's collaboration with external partners by following the recommendations in [Securing external collaboration in Microsoft Entra ID and Microsoft 365](../architecture/secure-external-access-resources.md). |-| Carefully plan your cross-tenant access and external collaboration settings | Microsoft Entra ID gives you a flexible set of controls for managing collaboration with external users and organizations. You can allow or block all collaboration, or configure collaboration only for specific organizations, users, and apps. Before configuring settings for cross-tenant access and external collaboration, take a careful inventory of the organizations you work and partner with. Then determine if you want to enable [B2B direct connect](b2b-direct-connect-overview.md) or [B2B collaboration](what-is-b2b.md) with other Microsoft Entra tenants, and how you want to manage [B2B collaboration invitations](external-collaboration-settings-configure.md). 
| +| Carefully plan your cross-tenant access and external collaboration settings | Microsoft Entra External ID gives you a flexible set of controls for managing collaboration with external users and organizations. You can allow or block all collaboration, or configure collaboration only for specific organizations, users, and apps. Before configuring settings for cross-tenant access and external collaboration, take a careful inventory of the organizations you work and partner with. Then determine if you want to enable [B2B direct connect](b2b-direct-connect-overview.md) or [B2B collaboration](what-is-b2b.md) with other Microsoft Entra tenants, and how you want to manage [B2B collaboration invitations](external-collaboration-settings-configure.md). | | Use tenant restrictions to control how external accounts are used on your networks and managed devices. | With tenant restrictions, you can prevent your users from using accounts they've created in unknown tenants or accounts they've received from external organizations. We recommend you disallow these accounts and use B2B collaboration instead. | | For an optimal sign-in experience, federate with identity providers | Whenever possible, federate directly with identity providers to allow invited users to sign in to your shared apps and resources without having to create Microsoft Accounts (MSAs) or Microsoft Entra accounts. You can use the [Google federation feature](google-federation.md) to allow B2B guest users to sign in with their Google accounts. Or, you can use the [SAML/WS-Fed identity provider (preview) feature](direct-federation.md) to set up federation with any organization whose identity provider (IdP) supports the SAML 2.0 or WS-Fed protocol. 
| | Use the Email one-time passcode feature for B2B guests who can't authenticate by other means | The [Email one-time passcode](one-time-passcode.md) feature authenticates B2B guest users when they can't be authenticated through other means like Microsoft Entra ID, a Microsoft account (MSA), or Google federation. When the guest user redeems an invitation or accesses a shared resource, they can request a temporary code, which is sent to their email address. Then they enter this code to continue signing in. | |
active-directory | B2b Quickstart Add Guest Users Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-quickstart-add-guest-users-portal.md | -#Customer intent: As a tenant admin, I want to walk through the B2B invitation workflow so that I can understand how to add a guest user in the portal, and understand the end user experience. +#Customer intent: As a tenant admin, I want to walk through the B2B invitation workflow so that I can understand how to add a guest user in the Microsoft Entra admin center, and understand the end user experience. # Quickstart: Add a guest user and send an invitation In this quickstart, you'll learn how to add a new guest user to your Microsoft E If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. -The updated experience for creating new users covered in this article is available as a Microsoft Entra ID preview feature. This feature is enabled by default, but you can opt out by going to **Microsoft Entra ID** > **Preview features** and disabling the **Create user experience** feature. For more information about previews, see [Universal License Terms for Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all). +The updated experience for creating new users covered in this article is available as a Microsoft Entra ID preview feature. This feature is enabled by default, but you can opt out by going to **Identity** > **Settings** > **Preview hub** and disabling the **Create user experience** feature. For more information about previews, see [Universal License Terms for Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all). Instructions for the legacy create user process can be found in the [Add or dele 
Instructions for the legacy create user process can be found in the [Add or dele To complete the scenario in this quickstart, you need: -- A role that allows you to create users in your tenant directory, such as the Global Administrator role or a limited administrator directory role such as Guest Inviter or User Administrator.+- A role that allows you to create users in your tenant directory, such as at least the [Guest Inviter](../roles/permissions-reference.md#guest-inviter) or [User Administrator](../roles/permissions-reference.md#user-administrator) role. - Access to a valid email address outside of your Microsoft Entra tenant, such as a separate work, school, or social email address. You'll use this email to create the guest account in your tenant directory and access the invitation. When no longer needed, delete the test guest user. In this quickstart, you created a guest user in the Microsoft Entra admin center and sent an invitation to share apps. Then you viewed the redemption process from the guest user's perspective, and verified that the guest user was able to access their My Apps page. To learn more about adding guest users for collaboration, see [Add Microsoft Entra B2B collaboration users in the Microsoft Entra admin center](add-users-administrator.md). To learn more about adding guest users with PowerShell, see [Add and invite guests with PowerShell](b2b-quickstart-invite-powershell.md).-You can also bulk invite guest users [via the portal](tutorial-bulk-invite.md) or [via PowerShell](bulk-invite-powershell.md). +You can also bulk invite guest users [via the admin center](tutorial-bulk-invite.md) or [via PowerShell](bulk-invite-powershell.md). |
active-directory | B2b Quickstart Invite Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-quickstart-invite-powershell.md | Title: 'Quickstart: Add a guest user with PowerShell' -description: In this quickstart, you learn how to use PowerShell to send an invitation to an external Microsoft Entra B2B collaboration user. You'll use the Microsoft Graph Identity Sign-ins and the Microsoft Graph Users PowerShell modules. +description: In this quickstart, you learn how to use PowerShell to send an invitation to a Microsoft Entra B2B collaboration user. You'll use the Microsoft Graph Identity Sign-ins and the Microsoft Graph Users PowerShell modules. Previously updated : 07/31/2023 Last updated : 09/22/2023 -#Customer intent: As a tenant admin, I want to walk through the B2B invitation workflow so that I can understand how to add a user through PowerShell. ++#Customer intent: As a tenant admin, I want to walk through the B2B invitation workflow so that I can understand how to add a user via PowerShell. # Quickstart: Add a guest user with PowerShell -There are many ways you can invite external partners to your apps and services with Microsoft Entra B2B collaboration. In the previous quickstart, you saw how to add guest users directly in the Azure portal. You can also use PowerShell to add guest users, either one at a time or in bulk. In this quickstart, you'll use the New-MgInvitation command to add one guest user to your Azure tenant. +There are many ways you can invite external partners to your apps and services with Microsoft Entra B2B collaboration. In the previous quickstart, you saw how to add guest users directly in the Microsoft Entra admin center. You can also use PowerShell to add guest users, either one at a time or in bulk. In this quickstart, you'll use the New-MgInvitation command to add one guest user to your Microsoft Entra tenant. 
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Prerequisites -### PowerShell Module -Install the [Microsoft Graph Identity Sign-ins module](/powershell/module/microsoft.graph.identity.signins/?view=graph-powershell-beta&preserve-view=true) (Microsoft.Graph.Identity.SignIns) and the [Microsoft Graph Users module](/powershell/module/microsoft.graph.users/?view=graph-powershell-beta&preserve-view=true) (Microsoft.Graph.Users). You can use the `#Requires` statement to prevent running a script unless the required PowerShell modules are present. ++To complete the scenario in this quickstart, you need: ++- A role that allows you to create users in your tenant directory, such as at least the [Guest Inviter](../roles/permissions-reference.md#guest-inviter) or [User Administrator](../roles/permissions-reference.md#user-administrator) role. +- Install the [Microsoft Graph Identity Sign-ins module](/powershell/module/microsoft.graph.identity.signins/?view=graph-powershell-beta&preserve-view=true) (Microsoft.Graph.Identity.SignIns) and the [Microsoft Graph Users module](/powershell/module/microsoft.graph.users/?view=graph-powershell-beta&preserve-view=true) (Microsoft.Graph.Users). You can use the `#Requires` statement to prevent running a script unless the required PowerShell modules are present. ```powershell #Requires -Modules Microsoft.Graph.Identity.SignIns, Microsoft.Graph.Users ``` -### Get a test email account --You need a test email account that you can send the invitation to. The account must be from outside your organization. You can use any type of account, including a social account such as a gmail.com or outlook.com address. +- Get a test email account that you can send the invitation to. The account must be from outside your organization. You can use any type of account, including a social account such as a gmail.com or outlook.com address. 
## Sign in to your tenant When prompted, enter your credentials. ## Verify the user exists in the directory -1. To verify that the invited user was added to Microsoft Entra ID, run the following command (replace **john\@contoso.com** with your invited email): +1. To verify that the invited user was added to Microsoft Entra ID, run the following command (replace **john@contoso.com** with your invited email): ```powershell Get-MgUser -Filter "Mail eq 'John@contoso.com'" Remove-MgUser -UserId '3f80a75e-750b-49aa-a6b0-d9bf6df7b4c6' ## Next steps-In this quickstart, you invited and added a single guest user to your directory using PowerShell. You can also invite a guest user using the [Azure portal](b2b-quickstart-add-guest-users-portal.md). Additionally you can [invite guest users in bulk using PowerShell](tutorial-bulk-invite.md). +In this quickstart, you invited and added a single guest user to your directory using PowerShell. You can also invite a guest user using the [Microsoft Entra admin center](b2b-quickstart-add-guest-users-portal.md). Additionally you can [invite guest users in bulk using PowerShell](tutorial-bulk-invite.md). |
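Taken together, the quickstart steps in this row amount to roughly the following PowerShell (a sketch, not the article's verbatim script — the email address and redirect URL are placeholders):

```powershell
#Requires -Modules Microsoft.Graph.Identity.SignIns, Microsoft.Graph.Users

# Sign in with permission to invite guest users.
Connect-MgGraph -Scopes 'User.Invite.All'

# Invite the external user (placeholder address and redirect URL).
$invitation = New-MgInvitation `
    -InvitedUserEmailAddress 'john@contoso.com' `
    -InviteRedirectUrl 'https://myapps.microsoft.com' `
    -SendInvitationMessage:$true

# Verify the guest account now exists in the directory.
Get-MgUser -Filter "Mail eq 'john@contoso.com'"
```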
active-directory | B2b Sponsors | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-sponsors.md | Title: Add sponsors to a guest user in the Microsoft Entra admin center - Microsoft Entra ID (preview) + Title: Add sponsors to a guest user in the Microsoft Entra admin center - External ID (preview) description: Shows how an admin can add sponsors to guest users in Microsoft Entra B2B collaboration. -# Customer intent: As a tenant administrator, I want to know how to add sponsors to guest users in Microsoft Entra ID. +# Customer intent: As a tenant administrator, I want to know how to add sponsors to guest users in Microsoft Entra External ID. # Sponsors field for B2B users (preview) |
active-directory | Claims Mapping | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/claims-mapping.md | -# B2B collaboration user claims mapping in Microsoft Entra ID +# B2B collaboration user claims mapping in Microsoft Entra External ID -Microsoft Entra ID supports customizing the claims that are issued in the SAML token for [B2B collaboration](what-is-b2b.md) users. When a user authenticates to the application, Microsoft Entra ID issues a SAML token to the app that contains information (or claims) about the user that uniquely identifies them. By default, this claim includes the user's user name, email address, first name, and last name. +With Microsoft Entra External ID, you can customize the claims that are issued in the SAML token for [B2B collaboration](what-is-b2b.md) users. When a user authenticates to the application, Microsoft Entra ID issues a SAML token to the app that contains information (or claims) about the user that uniquely identifies them. By default, this claim includes the user's user name, email address, first name, and last name. In the [Microsoft Entra admin center](https://entra.microsoft.com), you can view or edit the claims that are sent in the SAML token to the application. To access the settings, browse to **Identity** > **Applications** > **Enterprise applications** > the application that's configured for single sign-on > **Single sign-on**. See the SAML token settings in the **User Attributes** section. |
active-directory | Cross Tenant Access Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-tenant-access-overview.md | -Microsoft Entra organizations can use External Identities cross-tenant access settings to manage how they collaborate with other Microsoft Entra organizations and other Microsoft Azure clouds through B2B collaboration and [B2B direct connect](cross-tenant-access-settings-b2b-direct-connect.md). [Cross-tenant access settings](cross-tenant-access-settings-b2b-collaboration.md) give you granular control over how external Microsoft Entra organizations collaborate with you (inbound access) and how your users collaborate with external Microsoft Entra organizations (outbound access). These settings also let you trust multi-factor authentication (MFA) and device claims ([compliant claims and Microsoft Entra hybrid joined claims](../conditional-access/howto-conditional-access-policy-compliant-device.md)) from other Microsoft Entra organizations. +Microsoft Entra organizations can use External ID cross-tenant access settings to manage how they collaborate with other Microsoft Entra organizations and other Microsoft Azure clouds through B2B collaboration and [B2B direct connect](cross-tenant-access-settings-b2b-direct-connect.md). [Cross-tenant access settings](cross-tenant-access-settings-b2b-collaboration.md) give you granular control over how external Microsoft Entra organizations collaborate with you (inbound access) and how your users collaborate with external Microsoft Entra organizations (outbound access). These settings also let you trust multi-factor authentication (MFA) and device claims ([compliant claims and Microsoft Entra hybrid joined claims](../conditional-access/howto-conditional-access-policy-compliant-device.md)) from other Microsoft Entra organizations. 
This article describes cross-tenant access settings, which are used to manage B2B collaboration and B2B direct connect with external Microsoft Entra organizations, including across Microsoft clouds. More settings are available for B2B collaboration with non-Azure AD identities (for example, social identities or non-IT managed external accounts). These [external collaboration settings](external-collaboration-settings-configure.md) include options for restricting guest user access, specifying who can invite guests, and allowing or blocking domains. |
active-directory | Cross Tenant Access Settings B2b Collaboration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-tenant-access-settings-b2b-collaboration.md | Title: Configure B2B collaboration cross-tenant access -description: Use cross-tenant collaboration settings to manage how you collaborate with other Microsoft Entra organizations. Learn how to configure outbound access to external organizations and inbound access from external Microsoft Entra ID for B2B collaboration. +description: Use cross-tenant collaboration settings to manage how you collaborate with other Microsoft Entra organizations. Learn how to configure outbound access to external organizations and inbound access from external Microsoft Entra organizations for B2B collaboration. |
active-directory | Cross Tenant Access Settings B2b Direct Connect | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-tenant-access-settings-b2b-direct-connect.md | Title: Configure B2B direct connect cross-tenant access -description: Use cross-tenant access settings to manage how you collaborate with other Microsoft Entra organizations. Learn how to configure outbound access to external organizations and inbound access from external Microsoft Entra ID for B2B direct connect. +description: Use cross-tenant access settings to manage how you collaborate with other Microsoft Entra organizations. Learn how to configure outbound access to external organizations and inbound access from external Microsoft Entra organizations for B2B direct connect. |
active-directory | How To Web App Node Use Certificate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-web-app-node-use-certificate.md | Microsoft Entra ID for customers supports two types of authentication for [confi In production, you should purchase a certificate signed by a well-known certificate authority, and use [Azure Key Vault](https://azure.microsoft.com/products/key-vault/) to manage certificate access and lifetime for you. However, for testing purposes, you can create a self-signed certificate and configure your apps to authenticate with it. -In this article, you learn to generate a self-signed certificate by using [Azure Key Vault](https://azure.microsoft.com/products/key-vault/) on the Azure portal, OpenSSL or Windows PowerShell. If you have a client secret already, you'll learn how to safely delete it. +In this article, you learn to generate a self-signed certificate by using [Azure Key Vault](https://azure.microsoft.com/products/key-vault/) on the Azure portal, OpenSSL, or PowerShell. If you have a client secret already, you'll learn how to safely delete it. When needed, you can also create a self-signed certificate programmatically by using [.NET](/azure/key-vault/certificates/quick-create-net), [Node.js](/azure/key-vault/certificates/quick-create-node), [Go](/azure/key-vault/certificates/quick-create-go), [Python](/azure/key-vault/certificates/quick-create-python) or [Java](/azure/key-vault/certificates/quick-create-java) client libraries. |
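For the PowerShell option mentioned in the row above, a self-signed test certificate can be generated with the built-in PKI cmdlets (a sketch for testing only, as the article advises against self-signed certificates in production; the subject name, validity period, and file path are placeholders):

```powershell
# Create a self-signed certificate in the current user's store (testing only).
$cert = New-SelfSignedCertificate `
    -Subject 'CN=ciam-test-app' `
    -CertStoreLocation 'Cert:\CurrentUser\My' `
    -KeyAlgorithm RSA -KeyLength 2048 `
    -NotAfter (Get-Date).AddMonths(12)

# Export the public certificate to upload to the app registration.
Export-Certificate -Cert $cert -FilePath .\ciam-test-app.cer
```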
active-directory | Quickstart Get Started Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/quickstart-get-started-guide.md | In this quickstart, we'll guide you through customizing the look and feel of you ## Customize your sign-in experience +When you set up a customer tenant free trial, the guide will start automatically as part of the configuration of your new customer tenant. If you created your customer tenant with an Azure subscription, you can start the guide manually by following the steps below. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). +1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to your customer tenant. +1. Browse to **Home** > **Go to Microsoft Entra ID**. +1. On the **Get started** tab, select **Start the guide**. ++ :::image type="content" source="media/how-to-create-customer-tenant-portal/guide-link.png" alt-text="Screenshot that shows how to start the guide."::: + You can customize your customer's sign-in and sign-up experience in the External ID for customers tenant. Follow the guide that will help you set up the tenant in three easy steps. First, specify how you'd like your customers to sign in. At this step, you can choose between two options: **Email and password** or **Email and one-time passcode**. You can configure social accounts later, which would allow your customers to sign in using their [Google](how-to-google-federation-customers.md) or [Facebook](how-to-facebook-federation-customers.md) account. You can also [define custom attributes](how-to-define-custom-attributes.md) to collect from the user during sign-up. If you prefer, you can add your company logo, change the background color or adjust the sign-in layout. 
These optional changes will apply to the look and feel of all your apps in this tenant with customer configurations. After you've created the tenant, additional branding options are available. You can [customize the default branding](how-to-customize-branding-customers.md) and [add languages](how-to-customize-languages-customers.md). Once you're finished with the customization, select **Continue**. |
active-directory | Customize Invitation Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customize-invitation-api.md | -We've had many customers tell us that they want to customize the invitation process. [With our API](/graph/api/resources/invitation), you can customize the invitation process in a way that works best for your organization. +[With the Microsoft Graph REST API](/graph/api/resources/invitation), you can customize the invitation process in a way that works best for your organization. ## Capabilities of the invitation API |
active-directory | Default Account | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/default-account.md | -# Add Microsoft Entra ID as an identity provider for External Identities +# Add Microsoft Entra ID as an identity provider for External ID Microsoft Entra ID is available as an identity provider option for B2B collaboration by default. If an external guest user has a Microsoft Entra account through work or school, they can redeem your B2B collaboration invitations or complete your sign-up user flows using their Microsoft Entra account. |
active-directory | Direct Federation Adfs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/direct-federation-adfs.md | An AD FS server must already be set up and functioning before you begin this pro ## Configure AD FS for WS-Fed federation -Microsoft Entra B2B can be configured to federate with IdPs that use the WS-Fed protocol with the specific requirements listed below. Currently, the two WS-Fed providers that have been tested for compatibility with Microsoft Entra External ID are AD FS and Shibboleth. Here, we'll use Active Directory Federation Services (AD FS) as an example of the WS-Fed IdP. For more information about establishing a relying party trust between a WS-Fed compliant provider and Microsoft Entra External ID, download the Microsoft Azure AD Identity Provider Compatibility Docs. +Microsoft Entra B2B can be configured to federate with IdPs that use the WS-Fed protocol with the specific requirements listed below. Currently, the two WS-Fed providers that have been tested for compatibility with Microsoft Entra External ID are AD FS and Shibboleth. Here, we'll use Active Directory Federation Services (AD FS) as an example of the WS-Fed IdP. For more information about establishing a relying party trust between a WS-Fed compliant provider and Microsoft Entra External ID, download the Microsoft Entra identity provider compatibility docs. To set up federation, the following attributes must be received in the WS-Fed message from the IdP. These attributes can be configured by linking to the online security token service XML file or by entering them manually. 
Step 12 in [Create a test AD FS instance](https://medium.com/in-the-weeds/create-a-test-active-directory-federation-services-3-0-instance-on-an-azure-virtual-machine-9071d978e8ed) describes how to find the AD FS endpoints or how to generate your metadata URL, for example `https://fs.iga.azure-test.net/federationmetadata/2007-06/federationmetadata.xml`. |
active-directory | Direct Federation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/direct-federation.md | ->- *Direct federation* in Microsoft Entra ID is now referred to as *SAML/WS-Fed identity provider (IdP) federation*. +>- *Direct federation* in Microsoft Entra External ID is now referred to as *SAML/WS-Fed identity provider (IdP) federation*. This article describes how to set up federation with any organization whose identity provider (IdP) supports the SAML 2.0 or WS-Fed protocol. When you set up federation with a partner's IdP, new guest users from that domain can use their own IdP-managed organizational account to sign in to your Microsoft Entra tenant and start collaborating with you. There's no need for the guest user to create a separate Microsoft Entra account. |
active-directory | External Identities Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/external-identities-overview.md | Title: External Identities in Microsoft Entra ID + Title: Microsoft Entra External ID overview description: Microsoft Entra External ID allows you to collaborate with or publish apps to people outside your organization. Compare solutions for External Identities, including Microsoft Entra B2B collaboration, B2B direct connect, and Azure AD B2C. -# External Identities in Microsoft Entra ID +# Overview of Microsoft Entra External ID Microsoft Entra External ID refers to all the ways you can securely interact with users outside of your organization. If you want to collaborate with partners, distributors, suppliers, or vendors, you can share your resources and define how your internal users can access external organizations. If you're a developer creating consumer-facing apps, you can manage your customers' identity experiences. -With External Identities, external users can "bring their own identities." Whether they have a corporate or government-issued digital identity, or an unmanaged social identity like Google or Facebook, they can use their own credentials to sign in. The external user's identity provider manages their identity, and you manage access to your apps with Microsoft Entra ID or Azure AD B2C to keep your resources protected. +With External ID, external users can "bring their own identities." Whether they have a corporate or government-issued digital identity, or an unmanaged social identity like Google or Facebook, they can use their own credentials to sign in. The external user's identity provider manages their identity, and you manage access to your apps with Microsoft Entra ID or Azure AD B2C to keep your resources protected. 
The following capabilities make up External Identities: You can use [cross-tenant access settings](cross-tenant-access-overview.md) to m ## B2B direct connect -B2B direct connect is a new way to collaborate with other Microsoft Entra organizations. This feature currently works with Microsoft Teams shared channels. With B2B direct connect, you create two-way trust relationships with other Microsoft Entra organizations to allow users to seamlessly sign in to your shared resources and vice versa. B2B direct connect users aren't added as guests to your Microsoft Entra directory. When two organizations mutually enable B2B direct connect, users authenticate in their home organization and receive a token from the resource organization for access. Learn more about [B2B direct connect in Microsoft Entra ID](b2b-direct-connect-overview.md). +B2B direct connect is a new way to collaborate with other Microsoft Entra organizations. This feature currently works with Microsoft Teams shared channels. With B2B direct connect, you create two-way trust relationships with other Microsoft Entra organizations to allow users to seamlessly sign in to your shared resources and vice versa. B2B direct connect users aren't added as guests to your Microsoft Entra directory. When two organizations mutually enable B2B direct connect, users authenticate in their home organization and receive a token from the resource organization for access. Learn more about [B2B direct connect in Microsoft Entra External ID](b2b-direct-connect-overview.md). Currently, B2B direct connect enables the Teams Connect shared channels feature, which lets your users collaborate with external users from multiple organizations with a Teams shared channel for chat, calls, file-sharing, and app-sharing. 
Once you've set up B2B direct connect with an external organization, the following Teams shared channels capabilities become available: Azure AD B2C is a Customer Identity and Access Management (CIAM) solution that l With Azure AD B2C, customers can sign in with an identity they've already established (like Facebook or Gmail). You can completely customize and control how customers sign up, sign in, and manage their profiles when using your applications. -Although Azure AD B2C is built on the same technology as Microsoft Entra ID, it's a separate service with some feature differences. For more information about how an Azure AD B2C tenant differs from a Microsoft Entra tenant, see [Supported Microsoft Entra features](../../active-directory-b2c/supported-azure-ad-features.md) in the [Azure AD B2C documentation](../../active-directory-b2c/index.yml). +Although Azure AD B2C is built on the same technology as Microsoft Entra External ID, it's a separate service with some feature differences. For more information about how an Azure AD B2C tenant differs from a Microsoft Entra tenant, see [Supported Microsoft Entra features](../../active-directory-b2c/supported-azure-ad-features.md) in the [Azure AD B2C documentation](../../active-directory-b2c/index.yml). ## Comparing External Identities feature sets The following table gives a detailed comparison of the scenarios you can enable | **Single sign-on (SSO)** | SSO to all Microsoft Entra connected apps is supported. For example, you can provide access to Microsoft 365 or on-premises apps, and to other SaaS apps such as Salesforce or Workday. | SSO to a Teams shared channel. | SSO to customer owned apps within the Azure AD B2C tenants is supported. SSO to Microsoft 365 or to other Microsoft SaaS apps isn't supported. | | **Licensing and billing** | Based on monthly active users (MAU), including B2B collaboration and Azure AD B2C users. 
Learn more about [External Identities pricing](https://azure.microsoft.com/pricing/details/active-directory/external-identities/) and [billing setup for B2B](external-identities-pricing.md). | Based on monthly active users (MAU), including B2B collaboration, B2B direct connect, and Azure AD B2C users. Learn more about [External Identities pricing](https://azure.microsoft.com/pricing/details/active-directory/external-identities/) and [billing setup for B2B](external-identities-pricing.md). | Based on monthly active users (MAU), including B2B collaboration and Azure AD B2C users. Learn more about [External Identities pricing](https://azure.microsoft.com/pricing/details/active-directory/external-identities/) and [billing setup for Azure AD B2C](../../active-directory-b2c/billing.md). | | **Security policy and compliance** | Managed by the host/inviting organization (for example, with [Conditional Access policies](authentication-conditional-access.md) and cross-tenant access settings). | Managed by the host/inviting organization (for example, with [Conditional Access policies](authentication-conditional-access.md) and cross-tenant access settings). See also the [Teams documentation](/microsoftteams/security-compliance-overview). | Managed by the organization via [Conditional Access and Identity Protection](../../active-directory-b2c/conditional-access-identity-protection-overview.md). |-| **multifactor authentication** | If inbound trust settings to accept MFA claims from the user's home tenant are configured, and MFA policies have already been met in the user's home tenant, the external user can sign in. If MFA trust isn't enabled, the user is presented with an MFA challenge from the resource organization. [Learn more](authentication-conditional-access.md#mfa-for-azure-ad-external-users) about MFA for Microsoft Entra external users. 
| If inbound trust settings to accept MFA claims from the user's home tenant are configured, and MFA policies have already been met in the user's home tenant, the external user can sign in. If MFA trust isn't enabled, and Conditional Access policies require MFA, the user is blocked from accessing resources. You *must* configure your inbound trust settings to accept MFA claims from the organization. [Learn more](authentication-conditional-access.md#mfa-for-azure-ad-external-users) about MFA for Microsoft Entra external users. | [Integrates directly](../../active-directory-b2c/multi-factor-authentication.md) with Microsoft Entra multifactor authentication. | +| **Multifactor authentication** | If inbound trust settings to accept MFA claims from the user's home tenant are configured, and MFA policies have already been met in the user's home tenant, the external user can sign in. If MFA trust isn't enabled, the user is presented with an MFA challenge from the resource organization. [Learn more](authentication-conditional-access.md#mfa-for-azure-ad-external-users) about MFA for Microsoft Entra external users. | If inbound trust settings to accept MFA claims from the user's home tenant are configured, and MFA policies have already been met in the user's home tenant, the external user can sign in. If MFA trust isn't enabled, and Conditional Access policies require MFA, the user is blocked from accessing resources. You *must* configure your inbound trust settings to accept MFA claims from the organization. [Learn more](authentication-conditional-access.md#mfa-for-azure-ad-external-users) about MFA for Microsoft Entra external users. | [Integrates directly](../../active-directory-b2c/multi-factor-authentication.md) with Microsoft Entra multifactor authentication. | | **Microsoft cloud settings** | [Supported.](cross-cloud-settings.md) | [Not supported.](cross-cloud-settings.md) | Not applicable. 
| | **Entitlement management** | [Supported.](../governance/entitlement-management-overview.md) | Not supported. | Not applicable. | | **Line-of-business (LOB) apps** | Supported. | Not supported. Only B2B direct connect-enabled apps can be shared (currently, Teams Connect shared channels). | Works with [RESTful API](../../active-directory-b2c/technical-overview.md#add-your-own-business-logic-and-call-restful-apis). | Based on your organization's requirements you might use cross-tenant synchroni ## Managing External Identities features -Microsoft Entra B2B collaboration and B2B direct connect are features Microsoft Entra ID, and they're managed in the Azure portal through the Microsoft Entra service. To control inbound and outbound collaboration, you can use a combination of *cross-tenant access settings* and *external collaboration settings*. +Microsoft Entra B2B collaboration and B2B direct connect are features of Microsoft Entra External ID, and they're managed in the Azure portal through the Microsoft Entra service. To control inbound and outbound collaboration, you can use a combination of *cross-tenant access settings* and *external collaboration settings*. ### Cross-tenant access settings To set up B2B collaboration between tenants in different clouds, both tenants ne External collaboration settings determine whether your users can send B2B collaboration invitations to external users and the level of access guest users have to your directory. With these settings, you can: -- **Determine guest user permissions**. Microsoft Entra ID allows you to restrict what external guest users can see in your Microsoft Entra directory. For example, you can limit guest users' view of group memberships, or allow guests to view only their own profile information.+- **Determine guest user permissions**. Control what external guest users can see in your Microsoft Entra directory. 
For example, you can limit guest users' view of group memberships, or allow guests to view only their own profile information. - **Specify who can invite guests**. By default, all users in your organization, including B2B collaboration guest users, can invite external users to B2B collaboration. If you want to limit the ability to send invitations, you can turn invitations on or off for everyone, or limit invitations to certain roles. |
active-directory | Invite Internal Users | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/invite-internal-users.md | You can use the Microsoft Entra admin center, PowerShell, or the invitation API ## Use PowerShell to send a B2B invitation -You'll need Azure AD PowerShell module version 2.0.2.130 or later. Use the following command to update to the latest AzureAD PowerShell module and invite the internal user to B2B collaboration: +You'll need Azure AD PowerShell module version 2.0.2.130 or later. Use the following command to update to the latest module and invite the internal user to B2B collaboration: ```powershell Uninstall-Module AzureAD |
active-directory | Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/troubleshoot.md | You can enable this feature by using the setting 'ShowPeoplePickerSuggestionsFor By default, SharePoint Online and OneDrive have their own set of external user options and don't use the settings from Microsoft Entra ID. You need to enable [SharePoint and OneDrive integration with Microsoft Entra B2B](/sharepoint/sharepoint-azureb2b-integration-preview) to ensure the options are consistent among those applications. ## Invitations have been disabled for directory -If you're notified that you don't have permissions to invite users, verify that your user account is authorized to invite external users under Microsoft Entra ID > User settings > External users > Manage external collaboration settings: +If you're notified that you don't have permissions to invite users, verify that your user account is authorized to invite external users under Identity > Users > User settings > External users > Manage external collaboration settings: :::image type="content" source="media/troubleshoot/external-user-settings.png" alt-text="Screenshot showing the External User settings."::: Rarely, you might see this message: "This action can't be completed because th <a name='i-receive-the-error-that-azure-ad-cant-find-the-aad-extensions-app-in-my-tenant'></a> -## I receive the error that Microsoft Entra ID can't find the aad-extensions-app in my tenant +## I receive the error that Microsoft Entra ID can't find the `aad-extensions-app` in my tenant When you're using self-service sign-up features, like custom user attributes or user flows, an app called `aad-extensions-app. Do not modify. Used by AAD for storing user data.` is automatically created. It's used by Microsoft Entra External ID to store information about users who sign up and custom attributes collected. |
active-directory | Tutorial Bulk Invite | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/tutorial-bulk-invite.md | Check to see that the guest users you added exist in the directory either in the ### View guest users with PowerShell -To view guest users with PowerShell, you'll need the [Microsoft.Graph.Users PowerShell Module](/powershell/module/microsoft.graph.users/?view=graph-powershell-beta&preserve-view=true). Then sign in using the `Connect-MgGraph` command with an admin account to consent to the required scopes: +To view guest users with PowerShell, you'll need the [`Microsoft.Graph.Users` PowerShell module](/powershell/module/microsoft.graph.users/?view=graph-powershell-beta&preserve-view=true). Then sign in using the `Connect-MgGraph` command with an admin account to consent to the required scopes: ```powershell Connect-MgGraph -Scopes "User.Read.All" ``` |
active-directory | What Is B2b | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/what-is-b2b.md | -Microsoft Entra B2B collaboration is a feature within External Identities that lets you invite guest users to collaborate with your organization. With B2B collaboration, you can securely share your company's applications and services with external users, while maintaining control over your own corporate data. Work safely and securely with external partners, large or small, even if they don't have Microsoft Entra ID or an IT department. +B2B collaboration is a feature within Microsoft Entra External ID that lets you invite guest users to collaborate with your organization. With B2B collaboration, you can securely share your company's applications and services with external users, while maintaining control over your own corporate data. Work safely and securely with external partners, large or small, even if they don't have Microsoft Entra ID or an IT department. ![Diagram illustrating B2B collaboration.](media/what-is-b2b/b2b-collaboration-overview.png) |
active-directory | Create New Tenant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/create-new-tenant.md | -You can do all of your administrative tasks using the Microsoft Entra portal, including creating a new tenant for your organization. +You can do all of your administrative tasks using the Microsoft Entra admin center, including creating a new tenant for your organization. In this quickstart, you'll learn how to get to the Azure portal and Microsoft Entra ID, and you'll learn how to create a basic tenant for your organization. |
active-directory | How To Get Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/how-to-get-support.md | Microsoft Q&A is Azure's recommended source for community support. We recommend ||| | Microsoft Authentication Library (MSAL) | [[`msal`]](/answers/topics/azure-ad-msal.html) | | Open Web Interface for .NET (OWIN) middleware | [[`azure-active-directory`]](/answers/topics/azure-active-directory.html) |-| [Azure AD B2B / External Identities](../external-identities/what-is-b2b.md) | [[`azure-ad-b2b`]](/answers/topics/azure-ad-b2b.html) | +| [Microsoft Entra B2B / External Identities](../external-identities/what-is-b2b.md) | [[`azure-ad-b2b`]](/answers/topics/azure-ad-b2b.html) | | [Azure AD B2C](https://azure.microsoft.com/services/active-directory-b2c/) | [[`azure-ad-b2c`]](/answers/topics/azure-ad-b2c.html) | | [Microsoft Graph API](https://developer.microsoft.com/graph/) | [[`azure-ad-graph`]](/answers/topics/azure-ad-graph.html) | | All other authentication and authorization areas | [[`azure-active-directory`]](/answers/topics/azure-active-directory.html) | |
active-directory | Security Defaults | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-defaults.md | After you enable security defaults in your tenant, any user accessing the follow - Azure PowerShell - Azure CLI -This policy applies to all users who are accessing Azure Resource Manager services, whether they're an administrator or a user. +This policy applies to all users who are accessing Azure Resource Manager services, whether they're an administrator or a user. This applies to Azure Resource Manager APIs, such as access to your subscription, VMs, and storage accounts. It doesn't include Microsoft Entra ID or Microsoft Graph. > [!NOTE] > Pre-2017 Exchange Online tenants have modern authentication disabled by default. In order to avoid the possibility of a login loop while authenticating through these tenants, you must [enable modern authentication](/exchange/clients-and-mobile-in-exchange-online/enable-or-disable-modern-authentication-in-exchange-online). |
active-directory | Users Default Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/users-default-permissions.md | You can restrict default permissions for member users in the following ways: | **Create Microsoft 365 groups** | Setting this option to **No** prevents users from creating Microsoft 365 groups. Setting this option to **Some** allows a set of users to create Microsoft 365 groups. Global Administrators and User Administrators can still create Microsoft 365 groups. To learn how, see [Microsoft Entra cmdlets for configuring group settings](../enterprise-users/groups-settings-cmdlets.md). | | **Restrict access to Microsoft Entra administration portal** | **What does this switch do?** <br>**No** lets non-administrators browse the Microsoft Entra administration portal. <br>**Yes** Restricts non-administrators from browsing the Microsoft Entra administration portal. Non-administrators who are owners of groups or applications are unable to use the Azure portal to manage their owned resources. </p><p></p><p>**What does it not do?** <br> It doesn't restrict access to Microsoft Entra data using PowerShell, Microsoft GraphAPI, or other clients such as Visual Studio. <br>It doesn't restrict access as long as a user is assigned a custom role (or any role). </p><p></p><p>**When should I use this switch?** <br>Use this option to prevent users from misconfiguring the resources that they own. </p><p></p><p>**When should I not use this switch?** <br>Don't use this switch as a security measure. Instead, create a Conditional Access policy that targets Microsoft Azure Management that blocks non-administrators access to [Microsoft Azure Management](../conditional-access/concept-conditional-access-cloud-apps.md#microsoft-azure-management). 
</p><p></p><p> **How do I grant only specific non-administrator users the ability to use the Microsoft Entra administration portal?** <br> Set this option to **Yes**, then assign them a role like global reader. </p><p></p><p>**Restrict access to the Microsoft Entra administration portal** <br>A Conditional Access policy that targets Microsoft Azure Management targets access to all Azure management. | | **Restrict non-admin users from creating tenants** | Users can create tenants in the Microsoft Entra ID and Microsoft Entra administration portal under Manage tenant. The creation of a tenant is recorded in the Audit log as category DirectoryManagement and activity Create Company. Anyone who creates a tenant becomes the Global Administrator of that tenant. The newly created tenant doesn't inherit any settings or configurations. </p><p></p><p>**What does this switch do?** <br> Setting this option to **Yes** restricts creation of Microsoft Entra tenants to the Global Administrator or tenant creator roles. Setting this option to **No** allows non-admin users to create Microsoft Entra tenants. Tenant creation will continue to be recorded in the Audit log. </p><p></p><p>**How do I grant only specific non-administrator users the ability to create new tenants?** <br> Set this option to Yes, then assign them the tenant creator role.|-| **Restrict users from recovering the BitLocker key(s) for their owned devices** | This setting can be found in the Microsoft Entra ID and Microsoft Entra portal in the Device Settings. Setting this option to **Yes** restricts users from being able to self-service recover BitLocker key(s) for their owned devices. Users will have to contact their organization's helpdesk to retrieve their BitLocker keys. Setting this option to **No** allows users to recover their BitLocker key(s). 
| +| **Restrict users from recovering the BitLocker key(s) for their owned devices** | This setting can be found in the Microsoft Entra admin center in the Device Settings. Setting this option to **Yes** restricts users from being able to self-service recover BitLocker key(s) for their owned devices. Users will have to contact their organization's helpdesk to retrieve their BitLocker keys. Setting this option to **No** allows users to recover their BitLocker key(s). | | **Read other users** | This setting is available in Microsoft Graph and PowerShell only. Setting this flag to `$false` prevents all non-admins from reading user information from the directory. This flag doesn't prevent reading user information in other Microsoft services like Exchange Online.</p><p>This setting is meant for special circumstances, so we don't recommend setting the flag to `$false`. | The **Restrict non-admin users from creating tenants** option is shown [below](https://portal.azure.com/#view/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/~/UserSettings) |
active-directory | Configure Logic App Lifecycle Workflows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/configure-logic-app-lifecycle-workflows.md | Title: Configure a Logic App for Lifecycle Workflow use -description: Configure an Azure Logic App for use with Lifecycle Workflows +description: Configure an Azure Logic App for use with Lifecycle Workflows Last updated 06/22/2023 -- # Configure a Logic App for Lifecycle Workflow use Before you can use an existing Azure Logic App with the custom task extension feature of Lifecycle Workflows, it must first be made compatible. This reference guide provides a list of steps that must be taken to make the Azure Logic App compatible. For a guide on creating a new compatible Logic App via the Lifecycle Workflows portal, see [Trigger Logic Apps based on custom task extensions](trigger-custom-task.md). Before configuring your Azure Logic App custom extension for use with Lifecycle - Normal - Proof of Possession(POP) - To determine the security token type of your custom task extension, you'd check the **Custom extensions** page: :::image type="content" source="media/configure-logic-app-lifecycle-workflows/custom-task-extension-token-type.png" alt-text="Screenshot of custom task extension and token type."::: - > [!NOTE] > New custom task extensions will only have Proof of Possession(POP) token security type. Only task extensions created before the inclusion of the Proof of Possession token security type will have a type of Normal. To configure those you follow these steps: 1. On the left of the screen, select **Logic App code view**. 1. In the editor paste the following code:- ```LCW Logic App code view template ++ ```json { "definition": { "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#", To configure those you follow these steps: }, "parameters": {} }- ```+ 1. Select Save. 1. 
Switch to the **Logic App designer** and inspect the configured trigger and callback action. To build your custom business logic, add other actions between the trigger and callback action. If you're only interested in the fire-and-forget scenario, you may remove the callback action. -1. On the left of the screen, select **Identity**. +1. On the left of the screen, select **Identity**. 1. Under the system assigned tab, enable the status to register it with Microsoft Entra ID. -1. Select Save. +1. Select Save. ## Configure authorization policy for custom task extension with POP security token type If the security token type is **Proof of Possession (POP)** for your custom task extension, you'd set the authorization policy by following these steps: -1. For Logic Apps authorization policy, we need the managed identities **Application ID**. Since the Microsoft Entra admin center only shows the Object ID, we need to look up the Application ID. You can search for the managed identity by Object ID under **Enterprise Applications in the Microsoft Entra portal** to find the required Application ID. +1. For Logic Apps authorization policy, we need the managed identity's **Application ID**. Since the Microsoft Entra admin center only shows the Object ID, we need to look up the Application ID. You can search for the managed identity by Object ID under **Enterprise Applications** in the Microsoft Entra admin center to find the required Application ID. 1. Go back to the logic app you created, and select **Authorization**. 1. Create two authorization policies based on these tables: - Policy name: POP-Policy - - Policy type: AADPOP + Policy name: `POP-Policy` ++ Policy type: `AADPOP` |Claim |Value | ||| If the security token type is **Proof of Possession (POP)** for your custom task |u | management.azure.com | |p | /subscriptions/(subscriptionId)/resourceGroups/(resourceGroupName)/providers/Microsoft.Logic/workflows/(LogicApp name) | - 1. Save the Authorization policy. 
- > [!CAUTION] > Please pay attention to the details as minor differences can lead to problems later.-- For Issuer, ensure you did include the slash after your Tenant ID-- For appid, ensure the custom claim is "appid" in all lowercase. The appid value represents Lifecycle Workflows and is always the same.++- For `Issuer`, ensure you included the slash after your Tenant ID +- For `appid`, ensure the custom claim is `appid` in all lowercase. The `appid` value represents Lifecycle Workflows and is always the same. ## Configure authorization policy for custom task extension with normal security token type If the security token type is **Normal** for your custom task extension, you'd set the authorization policy by following these steps: -1. For Logic Apps authorization policy, we need the managed identities **Application ID**. Since the Microsoft Entra admin center only shows the Object ID, we need to look up the Application ID. You can search for the managed identity by Object ID under **Enterprise Applications in the Microsoft Entra portal** to find the required Application ID. +1. For Logic Apps authorization policy, we need the managed identity's **Application ID**. Since the Microsoft Entra admin center only shows the Object ID, we need to look up the Application ID. You can search for the managed identity by Object ID under **Enterprise Applications** in the Microsoft Entra admin center to find the required Application ID. 1. Go back to the logic app you created, and select **Authorization**. 1. 
Create two authorization policies based on these tables: - Policy name: AzureADLifecycleWorkflowsAuthPolicy + Policy name: `AzureADLifecycleWorkflowsAuthPolicy` - Policy type: AAD + Policy type: `AAD` |Claim |Value | ||| If the security token type is **Normal** for your custom task extension, you'd s |Audience | Application ID of your Logic Apps Managed Identity | |appid | ce79fdc4-cd1d-4ea5-8139-e74d7dbe0bb7 | - Policy name: AzureADLifecycleWorkflowsAuthPolicyV2App + Policy name: `AzureADLifecycleWorkflowsAuthPolicyV2App` - Policy type: AAD + Policy type: `AAD` |Claim |Value | ||| If the security token type is **Normal** for your custom task extension, you'd s > [!CAUTION] > Please pay attention to the details as minor differences can lead to problems later.-- For Issuer, ensure you did include the slash after your Tenant ID-- For Audience, ensure you're using the Application ID and not the Object ID of your Managed Identity-- For appid, ensure the custom claim is "appid" in all lowercase. The appid value represents Lifecycle Workflows and is always the same.++- For `Issuer`, ensure you included the slash after your Tenant ID. +- For Audience, ensure you're using the Application ID and not the Object ID of your Managed Identity. +- For `appid`, ensure the custom claim is `appid` in all lowercase. The `appid` value represents Lifecycle Workflows and is always the same. ## Using the Logic App with Lifecycle Workflows |
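The claim tables above can also be expressed declaratively: Logic Apps OAuth authorization policies live under `accessControl.triggers.openAuthenticationPolicies` in the workflow's ARM resource. Below is a minimal illustrative sketch of the first **AAD** policy only; `<tenantId>` and `<logicAppManagedIdentityAppId>` are placeholder values to substitute, and the claim names `iss` and `aud` are the raw token claims that the portal labels Issuer and Audience.

```json
{
  "properties": {
    "accessControl": {
      "triggers": {
        "openAuthenticationPolicies": {
          "policies": {
            "AzureADLifecycleWorkflowsAuthPolicy": {
              "type": "AAD",
              "claims": [
                { "name": "iss", "value": "https://sts.windows.net/<tenantId>/" },
                { "name": "aud", "value": "<logicAppManagedIdentityAppId>" },
                { "name": "appid", "value": "ce79fdc4-cd1d-4ea5-8139-e74d7dbe0bb7" }
              ]
            }
          }
        }
      }
    }
  }
}
```

Note the trailing slash after the tenant ID in the `iss` value and the all-lowercase `appid`, matching the caution above.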
active-directory | Entitlement Management Logs And Reporting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-logs-and-reporting.md | Archiving Microsoft Entra audit logs requires you to have Azure Monitor in an Az 1. Check if there's already a setting to send the audit logs to that workspace. -1. If there isn't already a setting, select **Add diagnostic setting**. Use the instructions in [Integrate Microsoft Entra ID logs with Azure Monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md) to send the Microsoft Entra audit log to the Azure Monitor workspace. +1. If there isn't already a setting, select **Add diagnostic setting**. Use the instructions in [Integrate Microsoft Entra logs with Azure Monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md) to send the Microsoft Entra audit log to the Azure Monitor workspace. ![Diagnostics settings pane](./media/entitlement-management-logs-and-reporting/audit-log-diagnostics-settings.png) $wks = Get-AzOperationalInsightsWorkspace ### Retrieve Log Analytics ID with multiple Azure subscriptions - [Get-AzOperationalInsightsWorkspace](/powershell/module/Az.OperationalInsights/Get-AzOperationalInsightsWorkspace) operates in one subscription at a time. So, if you have multiple Azure subscriptions, you want to make sure you connect to the one that has the Log Analytics workspace with the Microsoft Entra ID logs. + [Get-AzOperationalInsightsWorkspace](/powershell/module/Az.OperationalInsights/Get-AzOperationalInsightsWorkspace) operates in one subscription at a time. So, if you have multiple Azure subscriptions, you want to make sure you connect to the one that has the Log Analytics workspace with the Microsoft Entra logs. 
The following cmdlets display a list of subscriptions, and find the ID of the subscription that has the Log Analytics workspace: $subs | ft You can reauthenticate and associate your PowerShell session to that subscription using a command such as `Connect-AzAccount -Subscription $subs[0].id`. To learn more about how to authenticate to Azure from PowerShell, including non-interactively, see [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps). -If you have multiple Log Analytics workspaces in that subscription, then the cmdlet [Get-AzOperationalInsightsWorkspace](/powershell/module/Az.OperationalInsights/Get-AzOperationalInsightsWorkspace) returns the list of workspaces. Then you can find the one that has the Microsoft Entra ID logs. The `CustomerId` field returned by this cmdlet is the same as the value of the "Workspace ID" displayed in the Microsoft Entra admin center in the Log Analytics workspace overview. +If you have multiple Log Analytics workspaces in that subscription, then the cmdlet [Get-AzOperationalInsightsWorkspace](/powershell/module/Az.OperationalInsights/Get-AzOperationalInsightsWorkspace) returns the list of workspaces. Then you can find the one that has the Microsoft Entra logs. The `CustomerId` field returned by this cmdlet is the same as the value of the "Workspace ID" displayed in the Microsoft Entra admin center in the Log Analytics workspace overview. ```powershell $wks = Get-AzOperationalInsightsWorkspace |
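The matching step described above — compare each workspace's `CustomerId` against the Workspace ID shown in the admin center — is just a filter over the cmdlet's results. A small Python sketch of that logic, over made-up workspace records (the names and IDs are invented):

```python
# Made-up stand-ins for the objects Get-AzOperationalInsightsWorkspace returns.
workspaces = [
    {"Name": "ws-prod", "CustomerId": "aaaa-1111"},
    {"Name": "ws-dev", "CustomerId": "bbbb-2222"},
]

def find_workspace(workspaces, workspace_id):
    """Return the workspace whose CustomerId matches the admin center's Workspace ID."""
    matches = [w for w in workspaces if w["CustomerId"] == workspace_id]
    return matches[0] if matches else None

print(find_workspace(workspaces, "bbbb-2222")["Name"])  # → ws-dev
```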
active-directory | How To Lifecycle Workflow Sync Attributes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/how-to-lifecycle-workflow-sync-attributes.md | The EmployeeHireDate and EmployeeLeaveDateTime contain dates and times that must |SuccessFactors to Active Directory User Provisioning|FormatDateTime([endDate], ,"M/d/yyyy hh:mm:ss tt","yyyyMMddHHmmss.fZ")|On-premises AD string attribute|[Attribute mappings for SAP Success Factors](../saas-apps/sap-successfactors-inbound-provisioning-tutorial.md)| |Custom import to Active Directory|Must be in the format "yyyyMMddHHmmss.fZ"|On-premises AD string attribute|| |Microsoft Graph User API|Must be in the format "YYYY-MM-DDThh:mm:ssZ"|EmployeeHireDate and EmployeeLeaveDateTime||-|Workday to Microsoft Entra User Provisioning|Can use a direct mapping. No expression is needed but may be used to adjust the time portion of EmployeeHireDate and EmployeeLeaveDateTime|EmployeeHireDate and EmployeeLeaveDateTime|| -|SuccessFactors to Microsoft Entra User Provisioning|Can use a direct mapping. No expression is needed but may be used to adjust the time portion of EmployeeHireDate and EmployeeLeaveDateTime|EmployeeHireDate and EmployeeLeaveDateTime|| +|Workday to Microsoft Entra user provisioning|Can use a direct mapping. No expression is needed but may be used to adjust the time portion of EmployeeHireDate and EmployeeLeaveDateTime|EmployeeHireDate and EmployeeLeaveDateTime|| +|SuccessFactors to Microsoft Entra user provisioning|Can use a direct mapping. No expression is needed but may be used to adjust the time portion of EmployeeHireDate and EmployeeLeaveDateTime|EmployeeHireDate and EmployeeLeaveDateTime|| For more information on expressions, see [Reference for writing expressions for attribute mappings in Microsoft Entra ID](../app-provisioning/functions-for-customizing-application-data.md) |
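To make the two formats in the table concrete, here is a rough Python sketch converting an on-premises AD generalized-time string (`yyyyMMddHHmmss.fZ`) into the ISO 8601 shape (`YYYY-MM-DDThh:mm:ssZ`) that the Microsoft Graph user API expects; the sample timestamp is made up:

```python
from datetime import datetime, timezone

def ad_generalized_to_graph(value: str) -> str:
    """Convert an AD-style 'yyyyMMddHHmmss.fZ' string to Graph's 'YYYY-MM-DDThh:mm:ssZ'."""
    # Strip the fractional-second digit and trailing 'Z' before parsing.
    stamp = value.split(".")[0]
    parsed = datetime.strptime(stamp, "%Y%m%d%H%M%S").replace(tzinfo=timezone.utc)
    return parsed.strftime("%Y-%m-%dT%H:%M:%SZ")

print(ad_generalized_to_graph("20240301080000.0Z"))  # → 2024-03-01T08:00:00Z
```

Both formats denote UTC, so the conversion is a pure reformatting with no time-zone arithmetic.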
active-directory | How To Inbound Synch Ms Graph | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/cloud-sync/how-to-inbound-synch-ms-graph.md | The structure of how to do this consists of the following steps. They are: - [Review status](#review-status) - [Next steps](#next-steps) -Use these [Azure AD PowerShell Module for Windows PowerShell](/powershell/module/msonline/) commands to enable synchronization for a production tenant, a prerequisite for being able to call the Administration Web Service for that tenant. +Use these [Azure AD PowerShell module](/powershell/module/msonline/) commands to enable synchronization for a production tenant, a prerequisite for being able to call the Administration Web Service for that tenant. ## Basic setup |
active-directory | How To Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/cloud-sync/how-to-troubleshoot.md | When you troubleshoot agent problems, you verify that the agent was installed co You can verify these items in the portal and on the local server that's running the agent. -<a name='entra-portal-agent-verification'></a> --### Microsoft Entra portal agent verification +### Microsoft Entra admin center agent verification [!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)] |
active-directory | How To Bypassdirsyncoverrides | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-bypassdirsyncoverrides.md | Clear-ADSyncToolsDirSyncOverridesUser 'User1@Contoso.com' -MobilePhoneInAAD -Alt ## Next Steps -Learn more about [Microsoft Entra Connect: ADSyncTools PowerShell Module](reference-connect-adsynctools.md) +Learn more about [Microsoft Entra Connect: `ADSyncTools` PowerShell module](reference-connect-adsynctools.md) |
active-directory | How To Connect Configure Ad Ds Connector Account | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-configure-ad-ds-connector-account.md | -The PowerShell Module named [ADSyncConfig.psm1](reference-connect-adsyncconfig.md) was introduced with build 1.1.880.0 (released in August 2018) that includes a collection of cmdlets to help you configure the correct Active Directory permissions for your Microsoft Entra Connect deployment. +The PowerShell module named [`ADSyncConfig.psm1`](reference-connect-adsyncconfig.md) was introduced with build 1.1.880.0 (released in August 2018) and includes a collection of cmdlets to help you configure the correct Active Directory permissions for your Microsoft Entra Connect deployment. ## Overview The following PowerShell cmdlets can be used to set up Active Directory permissions of the AD DS Connector account, for each feature that you select to enable in Microsoft Entra Connect. To prevent any issues, you should prepare Active Directory permissions in advance whenever you want to install Microsoft Entra Connect using a custom domain account to connect to your forest. This ADSyncConfig module can also be used to configure permissions after Microsoft Entra Connect is deployed. The following table provides a summary of the permissions required on AD objects | Device writeback |Read and Write permissions to device objects and containers documented in [device writeback](how-to-connect-device-writeback.md). | | Group writeback |Read, Create, Update, and Delete group objects for synchronized **Office 365 groups**.| -## Using the ADSyncConfig PowerShell Module +## Using the ADSyncConfig PowerShell module + The ADSyncConfig module requires the [Remote Server Administration Tools (RSAT) for AD DS](/windows-server/remote/remote-server-administration-tools) since it depends on the AD DS PowerShell module and tools. 
To install RSAT for AD DS, open a Windows PowerShell window with 'Run As Administrator' and execute: ``` powershell |
active-directory | How To Connect Emergency Ad Fs Certificate Rotation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-emergency-ad-fs-certificate-rotation.md | You can also get the thumbprint by using AD FS Management. Go to **Service** > * ## Determine whether AD FS renews the certificates automatically By default, AD FS is configured to generate token signing and token decryption certificates automatically. It does so both during the initial configuration and when the certificates are approaching their expiration date. -You can run the following Windows PowerShell command: `PS C:\>Get-AdfsProperties | FL AutoCert*, Certificate*`. +You can run the following PowerShell command: `Get-AdfsProperties | FL AutoCert*, Certificate*`. The `AutoCertificateRollover` property describes whether AD FS is configured to renew token signing and token decrypting certificates automatically. Do either of the following: Now that you've added the first certificate, made it primary, and removed the ol ## Update Microsoft Entra ID with the new token-signing certificate -1. Open the Azure AD PowerShell Module for Windows PowerShell. Alternatively, open Windows PowerShell, and then run the `Import-Module msonline` command. +1. Open the Azure AD PowerShell module. Alternatively, open Windows PowerShell, and then run the `Import-Module msonline` command. 1. Connect to Microsoft Entra ID by running the following command: |
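For context on the thumbprints this procedure compares: a certificate thumbprint is simply the SHA-1 digest of the certificate's DER-encoded bytes, rendered as uppercase hex. A minimal Python sketch of that computation (the input here is a stand-in byte string, not a real certificate):

```python
import hashlib

def thumbprint(der_bytes):
    """Return the SHA-1 thumbprint of DER-encoded certificate bytes as uppercase hex."""
    return hashlib.sha1(der_bytes).hexdigest().upper()

# Stand-in bytes; a real call would pass the certificate's DER encoding.
print(thumbprint(b"abc"))  # → A9993E364706816ABA3E25717850C26C9CD0D89D
```

This is why the thumbprint shown in AD FS Management matches what any other tool computes for the same certificate bytes.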
active-directory | How To Connect Fed Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-fed-management.md | It's easy to add a domain to be federated with Microsoft Entra ID by using Micro The following sections provide details about some of the common tasks that you might have to perform to customize your AD FS sign-in page. ## <a name="customlogo"></a>Add a custom company logo or illustration -To change the logo of the company that's displayed on the **Sign-in** page, use the following Windows PowerShell cmdlet and syntax. +To change the logo of the company that's displayed on the **Sign-in** page, use the following PowerShell cmdlet and syntax. > [!NOTE] > The recommended dimensions for the logo are 260 x 35 \@ 96 dpi with a file size no greater than 10 KB. Set-AdfsWebTheme -TargetName default -Logo @{path="c:\Contoso\logo.PNG"} > The *TargetName* parameter is required. The default theme that's released with AD FS is named Default. ## <a name="addsignindescription"></a>Add a sign-in description -To add a sign-in page description to the **Sign-in page**, use the following Windows PowerShell cmdlet and syntax. +To add a sign-in page description to the **Sign-in page**, use the following PowerShell cmdlet and syntax. ```azurepowershell-interactive Set-AdfsGlobalWebContent -SignInPageDescriptionText "<p>Sign-in to Contoso requires device registration. Select <A href='http://fs1.contoso.com/deviceregistration/'>here</A> for more information.</p>" |
active-directory | How To Connect Fed O365 Certs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-fed-o365-certs.md | On your AD FS server, open the MSOnline PowerShell prompt, and connect to Micros > [!NOTE] > MSOL-Cmdlets are part of the MSOnline PowerShell module.-> You can download the MSOnline PowerShell Module directly from the PowerShell Gallery. +> You can download the MSOnline PowerShell module directly from the PowerShell Gallery. > > Two certificates should be listed now, one of which has a **NotAfter** date of a ### Step 2: Update the new token signing certificates for the Microsoft 365 trust Update Microsoft 365 with the new token signing certificates to be used for the trust, as follows. -1. Open the Azure AD PowerShell Module for Windows PowerShell. +1. Open the Azure AD PowerShell module. 2. Run $cred=Get-Credential. When this cmdlet prompts you for credentials, type your cloud service administrator account credentials. 3. Run Connect-MsolService -Credential $cred. This cmdlet connects you to the cloud service. Creating a context that connects you to the cloud service is required before running any of the additional cmdlets installed by the tool. 4. If you are running these commands on a computer that is not the AD FS primary federation server, run Set-MSOLAdfscontext -Computer <AD FS primary server>, where <AD FS primary server> is the internal FQDN name of the primary AD FS server. This cmdlet creates a context that connects you to AD FS. |
active-directory | How To Connect Fed Saml Idp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-fed-saml-idp.md | It is recommended that you always import the latest Microsoft Entra metadata whe <a name='add-azure-ad-as-a-relying-party'></a> ## Add Microsoft Entra ID as a relying party+ You must enable communication between your SAML 2.0 identity provider and Microsoft Entra ID. This configuration depends on your specific identity provider, and you should refer to its documentation. You would typically set the relying party ID to the same as the entityID from the Microsoft Entra metadata. >[!NOTE] >Verify the clock on your SAML 2.0 identity provider server is synchronized to an accurate time source. An inaccurate clock time can cause federated logins to fail. -## Install Windows PowerShell for sign-on with SAML 2.0 identity provider -After you have configured your SAML 2.0 identity provider for use with Microsoft Entra sign-on, the next step is to download and install the Azure AD PowerShell Module for Windows PowerShell. Once installed, you will use these cmdlets to configure your Microsoft Entra domains as federated domains. +## Install PowerShell for sign-on with SAML 2.0 identity provider ++After you have configured your SAML 2.0 identity provider for use with Microsoft Entra sign-on, the next step is to download and install the Azure AD PowerShell module. Once installed, you will use these cmdlets to configure your Microsoft Entra domains as federated domains. -The Azure AD PowerShell Module for Windows PowerShell is a download for managing your organizations data in Microsoft Entra ID. This module installs a set of cmdlets to Windows PowerShell; you run those cmdlets to set up single sign-on access to Microsoft Entra ID and in turn to all of the cloud services you are subscribed to. 
For instructions about how to download and install the cmdlets, see [/previous-versions/azure/jj151815(v=azure.100)](/previous-versions/azure/jj151815(v=azure.100)) +The Azure AD PowerShell module is a download for managing your organization's data in Microsoft Entra ID. This module installs a set of cmdlets to PowerShell; you run those cmdlets to set up single sign-on access to Microsoft Entra ID and in turn to all of the cloud services you are subscribed to. For instructions about how to download and install the cmdlets, see [/previous-versions/azure/jj151815(v=azure.100)](/previous-versions/azure/jj151815(v=azure.100)) <a name='set-up-a-trust-between-your-saml-identity-provider-and-azure-ad'></a> ## Set up a trust between your SAML identity provider and Microsoft Entra ID-Before configuring federation on a Microsoft Entra domain, it must have a custom domain configured. You cannot federate the default domain that is provided by Microsoft. The default domain from Microsoft ends with "onmicrosoft.com". -You will run a series of cmdlets in the Windows PowerShell command-line interface to add or convert domains for single sign-on. +Before configuring federation on a Microsoft Entra domain, it must have a custom domain configured. You cannot federate the default domain that is provided by Microsoft. The default domain from Microsoft ends with `onmicrosoft.com`. +You will run a series of PowerShell cmdlets to add or convert domains for single sign-on. Each Microsoft Entra domain that you want to federate using your SAML 2.0 identity provider must either be added as a single sign-on domain or converted to be a single sign-on domain from a standard domain. Adding or converting a domain sets up a trust between your SAML 2.0 identity provider and Microsoft Entra ID. 
Once federation has been configured you can switch back to "non-federated" ( <a name='provision-user-principals-to-azure-ad--microsoft-365'></a> ## Provision user principals to Microsoft Entra ID / Microsoft 365-Before you can authenticate your users to Microsoft 365, you must provision Microsoft Entra ID with user principals that correspond to the assertion in the SAML 2.0 claim. If these user principals are not known to Microsoft Entra ID in advance, then they cannot be used for federated sign-in. Either Microsoft Entra Connect or Windows PowerShell can be used to provision user principals. +Before you can authenticate your users to Microsoft 365, you must provision Microsoft Entra ID with user principals that correspond to the assertion in the SAML 2.0 claim. If these user principals are not known to Microsoft Entra ID in advance, then they cannot be used for federated sign-in. Either Microsoft Entra Connect or PowerShell can be used to provision user principals. Microsoft Entra Connect can be used to provision principals to your domains in your Microsoft Entra Directory from the on-premises Active Directory. For more detailed information, see [Integrate your on-premises directories with Microsoft Entra ID](../whatis-hybrid-identity.md). -Windows PowerShell can also be used to automate adding new users to Microsoft Entra ID and to synchronize changes from the on-premises directory. To use the Windows PowerShell cmdlets, you must download the [Azure AD PowerShell Module](/powershell/azure/active-directory/install-adv2). +PowerShell can also be used to automate adding new users to Microsoft Entra ID and to synchronize changes from the on-premises directory. To use the PowerShell cmdlets, you must download the [Azure Active Directory PowerShell module](/powershell/azure/active-directory/install-adv2). This procedure shows how to add a single user to Microsoft Entra ID. As the administrator, before you verify and manage single sign-on (also called i 1. 
You have reviewed the Microsoft Entra SAML 2.0 Protocol Requirements 2. You have configured your SAML 2.0 identity provider-3. Install Windows PowerShell for single sign-on with SAML 2.0 identity provider +3. Install PowerShell for single sign-on with SAML 2.0 identity provider 4. Set up a trust between SAML 2.0 identity provider and Microsoft Entra ID-5. Provisioned a known test user principal to Microsoft Entra ID (Microsoft 365) either through Windows PowerShell or Microsoft Entra Connect. +5. Provisioned a known test user principal to Microsoft Entra ID (Microsoft 365) via either PowerShell or Microsoft Entra Connect. 6. Configure directory synchronization using [Microsoft Entra Connect](../whatis-hybrid-identity.md). After setting up single sign-on with your SAML 2.0 SP-Lite based identity Provider, you should verify that it is working correctly. |
active-directory | How To Connect Import Export Config | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-import-export-config.md | To migrate the settings: 3. Run the script as shown here, and save the entire down-level server configuration directory. Copy this directory to the new staging server. You must copy the entire **Exported-ServerConfiguration-*** folder to the new server.- ![Screenshot that shows script in Windows PowerShell.](media/how-to-connect-import-export-config/migrate-2.png)![Screenshot that shows copying the Exported-ServerConfiguration-* folder.](media/how-to-connect-import-export-config/migrate-3.png) + ![Screenshot that shows script in PowerShell.](media/how-to-connect-import-export-config/migrate-2.png)![Screenshot that shows copying the Exported-ServerConfiguration-* folder.](media/how-to-connect-import-export-config/migrate-3.png) 4. Start **Microsoft Entra Connect** by double-clicking the icon on the desktop. Accept the Microsoft Software License Terms, and on the next page, select **Customize**. 5. Select the **Import synchronization settings** check box. Select **Browse** to browse the copied-over Exported-ServerConfiguration-* folder. Select the MigratedPolicy.json to import the migrated settings. |
active-directory | How To Connect Install Custom | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-install-custom.md | On the next page, you can select optional features for your scenario. >[!WARNING] >Microsoft Entra Connect versions 1.0.8641.0 and earlier rely on Azure Access Control Service for password writeback. This service was retired on November 7, 2018. If you use any of these versions of Microsoft Entra Connect and have enabled password writeback, users might lose the ability to change or reset their passwords when the service is retired. These versions of Microsoft Entra Connect don't support password writeback. >->For more information, see [Migrate from Azure Access Control Service](../../azuread-dev/active-directory-acs-migration.md). -> >If you want to use password writeback, download the [latest version of Microsoft Entra Connect](https://www.microsoft.com/download/details.aspx?id=47594). ![Screenshot showing the "Optional Features" page.](./media/how-to-connect-install-custom/optional2a.png) |
active-directory | How To Connect Install Multiple Domains | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-install-multiple-domains.md | Use the steps below to add an additional top-level domain. If you have already Use the following steps to remove the Microsoft Online trust and update your original domain. -1. On your AD FS federation server open **AD FS Management.** -2. On the left, expand **Trust Relationships** and **Relying Party Trusts** +1. On your AD FS federation server open **AD FS Management**. +2. On the left, expand **Trust Relationships** and **Relying Party Trusts**. 3. On the right, delete the **Microsoft Office 365 Identity Platform** entry. ![Remove Microsoft Online](./media/how-to-connect-install-multiple-domains/trust4.png)-4. On a machine that has [Azure AD PowerShell Module for Windows PowerShell](/previous-versions/azure/jj151815(v=azure.100)) installed on it run the following PowerShell: `$cred=Get-Credential`. +4. On a machine that has [Azure AD PowerShell module](/previous-versions/azure/jj151815(v=azure.100)) installed on it run the following PowerShell: `$cred=Get-Credential`. 5. Enter the username and password of a Hybrid Identity Administrator for the Microsoft Entra domain you are federating with.-6. In PowerShell, enter `Connect-MsolService -Credential $cred` -7. In PowerShell, enter `Update-MSOLFederatedDomain -DomainName <Federated Domain Name> -SupportMultipleDomain`. This update is for the original domain. So using the above domains it would be: `Update-MsolFederatedDomain -DomainName bmcontoso.com -SupportMultipleDomain` +6. In PowerShell, enter `Connect-MsolService -Credential $cred`. +7. In PowerShell, enter `Update-MSOLFederatedDomain -DomainName <Federated Domain Name> -SupportMultipleDomain`. This update is for the original domain. 
So using the above domains it would be: `Update-MsolFederatedDomain -DomainName bmcontoso.com -SupportMultipleDomain` Use the following steps to add the new top-level domain using PowerShell -1. On a machine that has [Azure AD PowerShell Module for Windows PowerShell](/previous-versions/azure/jj151815(v=azure.100)) installed on it run the following PowerShell: `$cred=Get-Credential`. +1. On a machine that has [Azure AD PowerShell module](/previous-versions/azure/jj151815(v=azure.100)) installed on it run the following PowerShell: `$cred=Get-Credential`. -2. Enter the username and password of a Hybrid Identity Administratoristrator for the Microsoft Entra domain you are federating with +2. Enter the username and password of a Hybrid Identity Administrator for the Microsoft Entra domain you are federating with 3. In PowerShell, enter `Connect-MsolService -Credential $cred` 4. In PowerShell, enter `New-MsolFederatedDomain -SupportMultipleDomain -DomainName` |
active-directory | How To Connect Modify Group Writeback | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-modify-group-writeback.md | To configure directory settings to disable automatic writeback of newly created ``` > [!NOTE] -> We recommend using Microsoft Graph PowerShell SDK with [Windows PowerShell 7](/powershell/scripting/whats-new/migrating-from-windows-powershell-51-to-powershell-7?view=powershell-7.3&preserve-view=true). +> We recommend using Microsoft Graph PowerShell SDK with [PowerShell 7](/powershell/scripting/whats-new/migrating-from-windows-powershell-51-to-powershell-7?view=powershell-7.3&preserve-view=true). - Microsoft Graph: Use the [directorySetting](/graph/api/resources/directorysetting?view=graph-rest-beta&preserve-view=true) resource type. |
active-directory | How To Connect Monitor Federation Changes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-monitor-federation-changes.md | Follow these steps to set up alerts to monitor the trust relationship: After the environment is configured, the data flows as follows: - 1. Microsoft Entra ID Logs get populated per the activity in the tenant. + 1. Microsoft Entra logs are populated per the activity in the tenant. 2. The log information flows to the Azure Log Analytics workspace. 3. A background job from Azure Monitor executes the log query based on the configuration of the Alert Rule in the configuration step (2) above. ``` After the environment is configured, the data flows as follows: ## Next steps -- [Integrate Microsoft Entra ID logs with Azure Monitor logs](../../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md)+- [Integrate Microsoft Entra logs with Azure Monitor logs](../../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md) - [Create, view, and manage log alerts using Azure Monitor](../../../azure-monitor/alerts/alerts-create-new-alert-rule.md) - [Manage AD FS trust with Microsoft Entra ID using Microsoft Entra Connect](how-to-connect-azure-ad-trust.md) - [Best practices for securing Active Directory Federation Services](/windows-server/identity/ad-fs/deployment/best-practices-securing-ad-fs) |
active-directory | How To Connect Password Hash Synchronization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-password-hash-synchronization.md | The following section describes, in-depth, how password hash synchronization wor When synchronizing passwords, the plain-text version of your password is not exposed to the password hash synchronization feature, to Microsoft Entra ID, or any of the associated services. -User authentication takes place against Microsoft Entra rather than against the organization's own Active Directory instance. The SHA256 password data stored in Microsoft Entra ID--a hash of the original MD4 hash--is more secure than what is stored in Active Directory. Further, because this SHA256 hash cannot be decrypted, it cannot be brought back to the organization's Active Directory environment and presented as a valid user password in a pass-the-hash attack. +User authentication takes place against Microsoft Entra rather than against the organization's own Active Directory instance. The SHA256 password data stored in Microsoft Entra ID (a hash of the original MD4 hash) is more secure than what is stored in Active Directory. Further, because this SHA256 hash cannot be decrypted, it cannot be brought back to the organization's Active Directory environment and presented as a valid user password in a pass-the-hash attack. ### Password policy considerations If your organization uses the accountExpires attribute as part of user account m ### Overwrite synchronized passwords -An administrator can manually reset your password directly in Microsoft Entra ID by using Windows PowerShell (unless the user is in a Federated Domain). +An administrator can manually reset your password directly in Microsoft Entra ID by using PowerShell (unless the user is in a federated domain). 
In this case, the new password overrides your synchronized password, and all password policies defined in the cloud are applied to the new password. |
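The "hash of the original MD4 hash" described above is produced by salting the MD4 hash and expanding it with PBKDF2-HMAC-SHA256 before anything leaves the server. The Python sketch below illustrates that shape only; the 1,000-iteration count, the 10-byte salt, and the stand-in input bytes are illustrative assumptions, not the service's exact parameters:

```python
import hashlib, os

def protect_password_hash(md4_hash, salt=None):
    """Salt and expand a (stand-in) MD4 password hash with PBKDF2-HMAC-SHA256."""
    salt = salt if salt is not None else os.urandom(10)  # assumed salt length
    derived = hashlib.pbkdf2_hmac("sha256", md4_hash, salt, 1000)  # assumed iteration count
    return salt, derived

# The derived value, not the MD4 hash itself, is what would be synchronized.
salt, derived = protect_password_hash(b"\x01" * 16, salt=b"\x00" * 10)
print(len(derived))  # → 32
```

Because the derivation is one-way, the synchronized value cannot be turned back into the on-premises MD4 hash, which is the property the pass-the-hash argument above relies on.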
active-directory | How To Connect Pta Quick Start | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-pta-quick-start.md | First, you can do it interactively by just running the downloaded Authentication Second, you can create and run an unattended deployment script. This is useful when you want to deploy multiple Authentication Agents at once, or install Authentication Agents on Windows servers that don't have user interface enabled, or that you can't access with Remote Desktop. Here are the instructions on how to use this approach: 1. Run the following command to install an Authentication Agent: `AADConnectAuthAgentSetup.exe REGISTERCONNECTOR="false" /q`.-2. You can register the Authentication Agent with our service using Windows PowerShell. Create a PowerShell Credentials object `$cred` that contains a global administrator username and password for your tenant. Run the following command, replacing *\<username\>* and *\<password\>*: +2. You can register the Authentication Agent with our service via PowerShell. Create a PowerShell Credentials object `$cred` that contains a global administrator username and password for your tenant. Run the following command, replacing `<username>` and `<password>`: ```powershell $User = "<username>" |
active-directory | How To Connect Staged Rollout | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-staged-rollout.md | Enable *seamless SSO* by doing the following tasks: 5. Call `Get-AzureADSSOStatus | ConvertFrom-Json`. This command displays a list of Active Directory forests (see the "Domains" list) on which this feature has been enabled. By default, it's set to false at the tenant level. - ![Example of the Windows PowerShell output](./media/how-to-connect-staged-rollout/staged-3.png) + ![Example of the PowerShell output](./media/how-to-connect-staged-rollout/staged-3.png) 6. Call `$creds = Get-Credential`. At the prompt, enter the domain administrator credentials for the intended Active Directory forest. |
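Step 5 above pipes `Get-AzureADSSOStatus` through `ConvertFrom-Json` to read the "Domains" list. The equivalent parse in Python, over a made-up payload (the `Enable` field and the exact JSON shape are assumptions for illustration):

```python
import json

# Made-up status payload; the real cmdlet returns JSON including a "Domains" list.
status_json = '{"Enable": false, "Domains": ["contoso.com", "fabrikam.com"]}'

status = json.loads(status_json)
# List the Active Directory forests/domains the feature reports on.
print(status["Domains"])  # → ['contoso.com', 'fabrikam.com']
```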
active-directory | How To Connect Sync Staging Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-sync-staging-server.md | We need to ensure that only one Sync Server is syncing changes at any given time > ![Screenshot shows Ready to Configure screen in the Active Microsoft Entra Connect dialog box.](media/how-to-connect-sync-staging-server/active-server-config.png) Since the server will be in staging mode, it will not write changes to Microsoft Entra ID, but retain any changes to the AD in its Connector Space, ready to write them. -It is recommended to leave the sync process on for the server in Staging Mode, so if it becomes active, it will quickly take over and won't have to do a large sync to catch up to the current state of the AD/Azure AD objects in scope. +It is recommended to leave the sync process on for the server in Staging Mode, so if it becomes active, it will quickly take over and won't have to do a large sync to catch up to the current state of the Active Directory / Microsoft Entra objects in scope. 5. After selecting to start the sync process and clicking Configure, the Microsoft Entra Connect server will be configured into Staging Mode. When this is completed, you will be prompted with a screen that confirms Staging Mode is enabled. |
active-directory | How To Connect Syncservice Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-syncservice-features.md | The synchronization feature of Microsoft Entra Connect has two components: * The on-premises component named **Microsoft Entra Connect Sync**, also called **sync engine**. * The service residing in Microsoft Entra ID also known as **Microsoft Entra Connect Sync service** -This topic explains how the following features of the **Microsoft Entra Connect Sync service** work and how you can configure them using Windows PowerShell. +This topic explains how the following features of the **Microsoft Entra Connect Sync service** work and how you can configure them using PowerShell. -These settings are configured by the [Azure AD PowerShell Module for Windows PowerShell](/previous-versions/azure/jj151815(v=azure.100)). Download and install it separately from Microsoft Entra Connect. The cmdlets documented in this topic were introduced in the [2016 March release (build 9031.1)](https://social.technet.microsoft.com/wiki/contents/articles/28552.microsoft-azure-active-directory-powershell-module-version-release-history.aspx#Version_9031_1). If you do not have the cmdlets documented in this topic or they do not produce the same result, then make sure you run the latest version. +These settings are configured by the [Azure AD PowerShell module](/previous-versions/azure/jj151815(v=azure.100)). Download and install it separately from Microsoft Entra Connect. The cmdlets documented in this topic were introduced in the [2016 March release (build 9031.1)](https://social.technet.microsoft.com/wiki/contents/articles/28552.microsoft-azure-active-directory-powershell-module-version-release-history.aspx#Version_9031_1). If you do not have the cmdlets documented in this topic or they do not produce the same result, then make sure you run the latest version. 
To see the configuration in your Microsoft Entra directory, run `Get-MsolDirSyncFeatures`. ![Get-MsolDirSyncFeatures result](./media/how-to-connect-syncservice-features/getmsoldirsyncfeatures.png) |
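The inspect-and-toggle workflow described above can be sketched end to end. This is a hedged example, not official guidance: it assumes the Azure AD PowerShell (MSOnline) module is installed, and `DuplicateUPNResiliency` is used only as an illustrative feature name:

```powershell
# Sketch: inspect and toggle a Connect Sync service-side feature.
# Assumes the Azure AD PowerShell (MSOnline) module is installed.
Connect-MsolService                                      # sign in to the tenant

Get-MsolDirSyncFeatures                                  # list all features and their state

# Enable one feature; the feature name below is only an example
Set-MsolDirSyncFeature -Feature DuplicateUPNResiliency -Enable $true

Get-MsolDirSyncFeatures -Feature DuplicateUPNResiliency  # confirm the change
```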
active-directory | Howto Troubleshoot Upn Changes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/howto-troubleshoot-upn-changes.md | Learn more: [How to use the Microsoft Authenticator app](https://support.microso Microsoft Authenticator app has four main functions: -* **multifactor authentication** with push notification or verification code +* **Multifactor authentication** with push notification or verification code * **Authentication broker** on iOS and Android devices for SSO for applications using brokered authentication * [Enable cross-app SSO on Android using MSAL](../../develop/msal-android-single-sign-on.md) * **Device registration** or workplace join, to Microsoft Entra ID, which is a requirement for Intune App Protection and Device Enrollment/Management |
active-directory | Reference Connect Accounts Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/reference-connect-accounts-permissions.md | The following table is a summary of the custom settings wizard pages, the creden ### Create the AD DS Connector account > [!IMPORTANT]-> A new PowerShell Module named *ADSyncConfig.psm1* was introduced with build 1.1.880.0 (released in August 2018). The module includes a collection of cmdlets that help you configure the correct Windows Server AD permissions for the Microsoft Entra DS Connector account. +> A new PowerShell module named *ADSyncConfig.psm1* was introduced with build 1.1.880.0 (released in August 2018). The module includes a collection of cmdlets that help you configure the correct Windows Server AD permissions for the Microsoft Entra Domain Services Connector account. > > For more information, see [Microsoft Entra Connect: Configure AD DS Connector account permission](how-to-connect-configure-ad-ds-connector-account.md). |
active-directory | Reference Connect Adconnectivitytools | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/reference-connect-adconnectivitytools.md | -The following documentation provides reference information for the ADConnectivityTools PowerShell Module that is included with Microsoft Entra Connect in `C:\Program Files\Microsoft Azure Active Directory Connect\Tools\ADConnectivityTool.psm1`. +The following documentation provides reference information for the `ADConnectivityTools` PowerShell module included with Microsoft Entra Connect in `C:\Program Files\Microsoft Azure Active Directory Connect\Tools\ADConnectivityTool.psm1`. ## Confirm-DnsConnectivity |
active-directory | Reference Connect Adsync | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/reference-connect-adsync.md | -# Microsoft Entra Connect: ADSync PowerShell Reference -The following documentation provides reference information for the ADSync.psm1 PowerShell Module that is included with Microsoft Entra Connect. +# Microsoft Entra Connect: ADSync PowerShell Reference +The following documentation provides reference information for the `ADSync.psm1` PowerShell module that is included with Microsoft Entra Connect. ## Add-ADSyncADDSConnectorAccount |
active-directory | Reference Connect Adsyncconfig | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/reference-connect-adsyncconfig.md | -The following documentation provides reference information for the ADSyncConfig.psm1 PowerShell Module that is included with Microsoft Entra Connect. +The following documentation provides reference information for the `ADSyncConfig.psm1` PowerShell module included with Microsoft Entra Connect. ## Get-ADSyncADConnectorAccount |
active-directory | Reference Connect Adsynctools | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/reference-connect-adsynctools.md | -The following documentation provides reference information for the ADSyncTools.psm1 PowerShell Module that is included with Microsoft Entra Connect. +The following documentation provides reference information for the `ADSyncTools.psm1` PowerShell module included with Microsoft Entra Connect. -## Install the ADSyncTools PowerShell Module -To install the ADSyncTools PowerShell Module do the following: +## Install the ADSyncTools PowerShell module ++To install the ADSyncTools PowerShell module, do the following: 1. Open Windows PowerShell with administrative privileges 2. Type or copy and paste the following: |
active-directory | Tshoot Connect Install Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/tshoot-connect-install-issues.md | However, if you don't meet the express installation criteria and must do the c * [Custom installation of Microsoft Entra Connect](./how-to-connect-install-custom.md) * [Microsoft Entra Connect: Upgrade from a previous version to the latest](./how-to-upgrade-previous-version.md) * [Microsoft Entra Connect: What is staging server?](./plan-connect-topologies.md#staging-server)-* [What is the ADConnectivityTool PowerShell Module?](./how-to-connect-adconnectivitytools.md) +* [What is the `ADConnectivityTool` PowerShell module?](./how-to-connect-adconnectivitytools.md) ## Next steps - [Microsoft Entra Connect Sync](how-to-connect-sync-whatis.md). |
active-directory | Tshoot Connect Largeobjecterror Usercertificate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/tshoot-connect-largeobjecterror-usercertificate.md | To obtain the list of objects in your tenant with LargeObject errors, use one of ## Mitigation options Until the LargeObject error is resolved, other attribute changes to the same object cannot be exported to Microsoft Entra ID. To resolve the error, you can consider the following options: - * Upgrade Azure AD Connect to build 1.1.524.0 or after. In Azure AD Connect build 1.1.524.0, the out-of-box synchronization rules have been updated to not export attributes userCertificate and userSMIMECertificate if the attributes have more than 15 values. For details on how to upgrade Azure AD Connect, refer to article [Microsoft Entra Connect: Upgrade from a previous version to the latest](./how-to-upgrade-previous-version.md). + * Upgrade Microsoft Entra Connect to build 1.1.524.0 or later. In Microsoft Entra Connect build 1.1.524.0, the out-of-box synchronization rules have been updated to not export attributes userCertificate and userSMIMECertificate if the attributes have more than 15 values. For details on how to upgrade Microsoft Entra Connect, refer to the article [Microsoft Entra Connect: Upgrade from a previous version to the latest](./how-to-upgrade-previous-version.md). * Implement an **outbound sync rule** in Microsoft Entra Connect that exports a **null value instead of the actual values for objects with more than 15 certificate values**. This option is suitable if you do not require any of the certificate values to be exported to Microsoft Entra ID for objects with more than 15 values. For details on how to implement this sync rule, refer to next section [Implementing sync rule to limit export of userCertificate attribute](#implementing-sync-rule-to-limit-export-of-usercertificate-attribute). 
Ensure no synchronization takes place while you are in the middle of implementin 2. Disable scheduled synchronization by running cmdlet: `Set-ADSyncScheduler -SyncCycleEnabled $false` > [!Note]-> The preceding steps are only applicable to newer versions (1.1.xxx.x) of Azure AD Connect with the built-in scheduler. If you are using older versions (1.0.xxx.x) of Azure AD Connect that uses Windows Task Scheduler, or you are using your own custom scheduler (not common) to trigger periodic synchronization, you need to disable them accordingly. +> The preceding steps are only applicable to newer versions (1.1.xxx.x) of Microsoft Entra Connect with the built-in scheduler. If you are using older versions (1.0.xxx.x) of Microsoft Entra Connect that uses Windows Task Scheduler, or you are using your own custom scheduler (not common) to trigger periodic synchronization, you need to disable them accordingly. 1. Start the **Synchronization Service Manager** by going to START → Synchronization Service. Now that the issue is resolved, re-enable the built-in sync scheduler: 2. Re-enable scheduled synchronization by running cmdlet: `Set-ADSyncScheduler -SyncCycleEnabled $true` > [!Note]-> The preceding steps are only applicable to newer versions (1.1.xxx.x) of Azure AD Connect with the built-in scheduler. If you are using older versions (1.0.xxx.x) of Azure AD Connect that uses Windows Task Scheduler, or you are using your own custom scheduler (not common) to trigger periodic synchronization, you need to disable them accordingly. +> The preceding steps are only applicable to newer versions (1.1.xxx.x) of Microsoft Entra Connect with the built-in scheduler. If you are using older versions (1.0.xxx.x) of Microsoft Entra Connect that uses Windows Task Scheduler, or you are using your own custom scheduler (not common) to trigger periodic synchronization, you need to re-enable them accordingly. 
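Taken together, the disable/re-enable steps around a maintenance window look roughly like the following sequence, run on the Microsoft Entra Connect server itself. This is a sketch of the documented steps, not an official script:

```powershell
# Run on the Microsoft Entra Connect server; the ADSync module ships with the product
Import-Module ADSync

# Pause the built-in scheduler before making sync rule changes
Set-ADSyncScheduler -SyncCycleEnabled $false

# ... implement and verify your changes here ...

# Check the scheduler state (SyncCycleEnabled should read False while paused)
Get-ADSyncScheduler

# Re-enable scheduled synchronization once the work is done
Set-ADSyncScheduler -SyncCycleEnabled $true
```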
## Next steps Learn more about [Integrating your on-premises identities with Microsoft Entra ID](../whatis-hybrid-identity.md). |
active-directory | Tshoot Connect Sso | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/tshoot-connect-sso.md | This article helps you find troubleshooting information about common problems re ## Check status of feature -Ensure that the Seamless SSO feature is still **Enabled** on your tenant. You can check the status by going to the **Identity** > **Hybrid management** > **Azure AD Connect** > **Connect Sync** pane in the [[Microsoft Entra admin center](https://entra.microsoft.com)](https://portal.azure.com/). +Ensure that the Seamless SSO feature is still **Enabled** on your tenant. You can check the status by going to the **Identity** > **Hybrid management** > **Microsoft Entra Connect** > **Connect Sync** pane in the [Microsoft Entra admin center](https://entra.microsoft.com). ![Screenshot of the Microsoft Entra admin center: Microsoft Entra Connect pane.](./media/tshoot-connect-sso/sso10.png) |
active-directory | Tshoot Connect Tshoot Sql Connectivity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/tshoot-connect-tshoot-sql-connectivity.md | Import-module -Name "C:\Program Files\Microsoft Azure Active Directory Connect\T >[!NOTE] >Install-Module requires updating to [PowerShell 5.0 (WMF 5.0)](https://www.microsoft.com/download/details.aspx?id=50395) or later; -Or install [PackageManagement PowerShell Modules Preview - March 2016 for PowerShell 3.0/4.0](/powershell/module/PackageManagement) +Or install [PackageManagement PowerShell module preview - March 2016 for PowerShell 3.0/4.0](/powershell/module/PackageManagement) - **Show all commands**: `Get-Command -Module AdSyncTools` - **Execute the PowerShell function**: `Connect-ADSyncDatabase` with the following parameters |
active-directory | Decommission Connect Sync V1 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/decommission-connect-sync-v1.md | Last updated 05/31/2023 + - # Decommission Azure AD Connect V1 The one-year advance notice of Azure AD Connect V1's retirement was announced in August 2021. As of August 31, 2022, all V1 versions went out of support and were subject to stop working unexpectedly at any point. |
active-directory | Howto Export Risk Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-export-risk-data.md | AADRiskyUsers ## Storage account -By routing logs to an Azure storage account, you can keep it for longer than the default retention period. For more information, see the article [Tutorial: Archive Microsoft Entra ID logs to an Azure storage account](../reports-monitoring/quickstart-azure-monitor-route-logs-to-storage-account.md). +By routing logs to an Azure storage account, you can keep it for longer than the default retention period. For more information, see the article [Tutorial: Archive Microsoft Entra logs to an Azure storage account](../reports-monitoring/quickstart-azure-monitor-route-logs-to-storage-account.md). ## Azure Event Hubs -Azure Event Hubs can look at incoming data from sources like Microsoft Entra ID Protection and provide real-time analysis and correlation. For more information, see the article [Tutorial: Stream Microsoft Entra ID logs to an Azure event hub](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) +Azure Event Hubs can look at incoming data from sources like Microsoft Entra ID Protection and provide real-time analysis and correlation. 
For more information, see the article [Tutorial: Stream Microsoft Entra logs to an Azure event hub](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) ## Other options Organizations can use the [Microsoft Graph API to programmatically interact with ## Next steps -- [What is Microsoft Entra ID monitoring?](../reports-monitoring/overview-monitoring.md)+- [What is Microsoft Entra monitoring?](../reports-monitoring/overview-monitoring-health.md) - [Install and use the log analytics views for Microsoft Entra ID](../../azure-monitor/visualize/workbooks-view-designer-conversion-overview.md) - [Connect data from Microsoft Entra ID Protection](../../sentinel/data-connectors/azure-active-directory-identity-protection.md) - [Microsoft Entra ID Protection and the Microsoft Graph PowerShell SDK](howto-identity-protection-graph-api.md)-- [Tutorial: Stream Microsoft Entra ID logs to an Azure event hub](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md)+- [Tutorial: Stream Microsoft Entra logs to an Azure event hub](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) |
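Once risk data lands in a storage account or event hub, downstream tooling typically filters the records before acting on them. The following PowerShell sketch assumes records shaped like rows of the AADRiskyUsers table; the sample field values are illustrative, not a documented contract:

```powershell
# Illustrative export shaped like AADRiskyUsers rows (sample data, not real users)
$raw = '[
  {"UserPrincipalName":"alice@contoso.com","RiskLevel":"high","RiskState":"atRisk"},
  {"UserPrincipalName":"bob@contoso.com","RiskLevel":"low","RiskState":"dismissed"}
]'

# Keep only users still flagged at risk with a high risk level
$highRisk = $raw | ConvertFrom-Json |
    Where-Object { $_.RiskLevel -eq 'high' -and $_.RiskState -eq 'atRisk' } |
    Select-Object -ExpandProperty UserPrincipalName

$highRisk   # alice@contoso.com
```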
active-directory | Add Application Portal Assign Users | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/add-application-portal-assign-users.md | To create a user account in your Microsoft Entra tenant: To assign a user account to an enterprise application: 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).-1. Browse to **Identity** > **Applications** > **Enterprise applications** > **All applications**. For example, the application that you created in the previous quickstart named **Azure AD SAML toolkit 1**. +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **All applications**. For example, the application that you created in the previous quickstart named **Azure AD SAML Toolkit 1**. 1. In the left pane, select **Users and groups**, and then select **Add user/group**. :::image type="content" source="media/add-application-portal-assign-users/assign-user.png" alt-text="Assign user account to an application in your Microsoft Entra tenant."::: |
active-directory | Add Application Portal Setup Sso | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/add-application-portal-setup-sso.md | -Microsoft Entra ID has a gallery that contains thousands of pre-integrated applications that use SSO. This article uses an enterprise application named **Azure AD SAML toolkit 1** as an example, but the concepts apply for most pre-configured enterprise applications in the gallery. +Microsoft Entra ID has a gallery that contains thousands of pre-integrated applications that use SSO. This article uses an enterprise application named **Azure AD SAML Toolkit 1** as an example, but the concepts apply for most pre-configured enterprise applications in the gallery. It is recommended that you use a non-production environment to test the steps in this article. To enable SSO for an application: 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). 1. Browse to **Identity** > **Applications** > **Enterprise applications** > **All applications**. -1. Enter the name of the existing application in the search box, and then select the application from the search results. For example, **Azure AD SAML toolkit 1**. +1. Enter the name of the existing application in the search box, and then select the application from the search results. For example, **Azure AD SAML Toolkit 1**. 1. In the **Manage** section of the left menu, select **Single sign-on** to open the **Single sign-on** pane for editing. 1. Select **SAML** to open the SSO configuration page. After the application is configured, users can sign in to it by using their credentials from the Microsoft Entra tenant.-1. The process of configuring an application to use Microsoft Entra ID for SAML-based SSO varies depending on the application. 
For any of the enterprise applications in the gallery, use the **configuration guide** link to find information about the steps needed to configure the application. The steps for the **Azure AD SAML toolkit 1** are listed in this article. +1. The process of configuring an application to use Microsoft Entra ID for SAML-based SSO varies depending on the application. For any of the enterprise applications in the gallery, use the **configuration guide** link to find information about the steps needed to configure the application. The steps for the **Azure AD SAML Toolkit 1** are listed in this article. :::image type="content" source="media/add-application-portal-setup-sso/saml-configuration.png" alt-text="Configure single sign-on for an enterprise application."::: -1. In the **Set up Azure AD SAML toolkit 1** section, record the values of the **Login URL**, **Microsoft Entra Identifier**, and **Logout URL** properties to be used later. +1. In the **Set up Azure AD SAML Toolkit 1** section, record the values of the **Login URL**, **Microsoft Entra Identifier**, and **Logout URL** properties to be used later. ## Configure single sign-on in the tenant Using single sign-on in the application requires you to register the user accoun To register a user account with the application: -1. Open a new browser window and browse to the sign-in URL for the application. For the **Azure AD SAML toolkit** application, the address is `https://samltoolkit.azurewebsites.net`. +1. Open a new browser window and browse to the sign-in URL for the application. For the **Azure AD SAML Toolkit** application, the address is `https://samltoolkit.azurewebsites.net`. 1. Select **Register** in the upper right corner of the page. 
- :::image type="content" source="media/add-application-portal-setup-sso/toolkit-register.png" alt-text="Register a user account in the Azure AD SAML toolkit application."::: + :::image type="content" source="media/add-application-portal-setup-sso/toolkit-register.png" alt-text="Register a user account in the Azure AD SAML Toolkit application."::: 1. For **Email**, enter the email address of the user that will access the application. Ensure that the user account is already assigned to the application. 1. Enter a **Password** and confirm it. You can test the single sign-on configuration from the **Set up single sign-on** To test SSO: -1. In the **Test single sign-on with Azure AD SAML toolkit 1** section, on the **Set up single sign-on with SAML** pane, select **Test**. +1. In the **Test single sign-on with Azure AD SAML Toolkit 1** section, on the **Set up single sign-on with SAML** pane, select **Test**. 1. Sign in to the application using the Microsoft Entra credentials of the user account that you assigned to the application. |
active-directory | Add Application Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/add-application-portal.md | -In this quickstart, you use the Microsoft Entra admin center to add an enterprise application to your Microsoft Entra tenant. Microsoft Entra ID has a gallery that contains thousands of enterprise applications that have been preintegrated. Many of the applications your organization uses are probably already in the gallery. This quickstart uses the application named **Azure AD SAML toolkit** as an example, but the concepts apply for most [enterprise applications in the gallery](../saas-apps/tutorial-list.md). +In this quickstart, you use the Microsoft Entra admin center to add an enterprise application to your Microsoft Entra tenant. Microsoft Entra ID has a gallery that contains thousands of enterprise applications that have been preintegrated. Many of the applications your organization uses are probably already in the gallery. This quickstart uses the application named **Azure AD SAML Toolkit** as an example, but the concepts apply for most [enterprise applications in the gallery](../saas-apps/tutorial-list.md). It's recommended that you use a nonproduction environment to test the steps in this quickstart. To add an enterprise application to your tenant: 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). 1. Browse to **Identity** > **Applications** > **Enterprise applications** > **All applications**. 1. Select **New application**.-1. The **Browse Microsoft Entra Gallery** pane opens and displays tiles for cloud platforms, on-premises applications, and featured applications. Applications listed in the **Featured applications** section have icons indicating whether they support federated single sign-on (SSO) and provisioning. 
Search for and select the application. In this quickstart, **Azure AD SAML toolkit* is being used. +1. The **Browse Microsoft Entra Gallery** pane opens and displays tiles for cloud platforms, on-premises applications, and featured applications. Applications listed in the **Featured applications** section have icons indicating whether they support federated single sign-on (SSO) and provisioning. Search for and select the application. In this quickstart, **Azure AD SAML Toolkit** is being used. :::image type="content" source="media/add-application-portal/browse-gallery.png" alt-text="Browse in the enterprise application gallery for the application that you want to add."::: |
active-directory | Application Sign In Unexpected User Consent Error | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/application-sign-in-unexpected-user-consent-error.md | End-users won't be able to grant consent to apps that have been detected as risk ## Next steps -[Apps, permissions, and consent in Azure Active Directory (v1 endpoint)](../develop/quickstart-register-app.md)<br> +[Apps, permissions, and consent in Azure Active Directory (v1.0 endpoint)](../develop/quickstart-register-app.md)<br> -[Scopes, permissions, and consent in the Microsoft Entra ID (v2.0 endpoint)](../develop/permissions-consent-overview.md) +[Scopes, permissions, and consent in the Microsoft identity platform (v2.0 endpoint)](../develop/permissions-consent-overview.md) [Unexpected consent prompt when signing in to an application](application-sign-in-unexpected-user-consent-prompt.md) |
active-directory | Delete Application Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/delete-application-portal.md | To delete an enterprise application, you need: 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). 1. Browse to **Identity** > **Applications** > **Enterprise applications** > **All applications**.-1. Enter the name of the existing application in the search box, and then select the application from the search results. In this article, we use the **Azure AD SAML toolkit 1** as an example. +1. Enter the name of the existing application in the search box, and then select the application from the search results. In this article, we use the **Azure AD SAML Toolkit 1** as an example. 1. In the **Manage** section of the left menu, select **Properties**. 1. At the top of the **Properties** pane, select **Delete**, and then select **Yes** to confirm you want to delete the application from your Microsoft Entra tenant. |
active-directory | F5 Big Ip Kerberos Easy Button | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-kerberos-easy-button.md | Initiate the APM Guided Configuration to launch the Easy Button template. 1. Navigate to **Access > Guided Configuration > Microsoft Integration** and select **Azure AD Application**. - ![Screenshot of the Microsoft Entra Application option on Guided Configuration.](./media/f5-big-ip-easy-button-ldap/easy-button-template.png) + ![Screenshot of the Azure A D Application option on Guided Configuration.](./media/f5-big-ip-easy-button-ldap/easy-button-template.png) 2. Review the configuration steps and select **Next** The BIG-IP does not support group Managed Service Accounts (gMSA), therefore cre 1. Enter the following PowerShell command. Replace the **UserPrincipalName** and **SamAccountName** values with your environment values. For better security, use a dedicated SPN that matches the host header of the application. - ```New-ADUser -Name "F5 BIG-IP Delegation Account" UserPrincipalName $HOST_SPN SamAccountName "f5-big-ip" -PasswordNeverExpires $true Enabled $true -AccountPassword (Read-Host -AsSecureString "Account Password") ``` + `New-ADUser -Name "F5 BIG-IP Delegation Account" -UserPrincipalName $HOST_SPN -SamAccountName "f5-big-ip" -PasswordNeverExpires $true -Enabled $true -AccountPassword (Read-Host -AsSecureString "Account Password")` HOST_SPN = host/f5-big-ip.contoso.com@contoso.com The BIG-IP does not support group Managed Service Accounts (gMSA), therefore cre 2. 
Create a **Service Principal Name (SPN)** for the APM service account to use during delegation to the web application service account: - ```Set-AdUser -Identity f5-big-ip -ServicePrincipalNames @Add="host/f5-big-ip.contoso.com"} ``` + `Set-AdUser -Identity f5-big-ip -ServicePrincipalNames @{ Add="host/f5-big-ip.contoso.com" }` >[!NOTE] >It is mandatory to include the host/ part in the format of UserPrincipalName (host/name.domain@domain) or ServicePrincipalName (host/name.domain). The BIG-IP does not support group Managed Service Accounts (gMSA), therefore cre * Confirm your web application is running in the computer context or a dedicated service account. * For the Computer context, use the following command to query the account object in the Active Directory to see its defined SPNs. Replace <name_of_account> with the account for your environment. - ```Get-ADComputer -identity <name_of_account> -properties ServicePrincipalNames | Select-Object -ExpandProperty ServicePrincipalNames ``` + `Get-ADComputer -identity <name_of_account> -properties ServicePrincipalNames | Select-Object -ExpandProperty ServicePrincipalNames` For example: Get-ADComputer -identity f5-big-ip -properties ServicePrincipalNames | Select-Object -ExpandProperty ServicePrincipalNames * For the dedicated service account, use the following command to query the account object in Active Directory to see its defined SPNs. Replace <name_of_account> with the account for your environment.
- ```Get-ADUser -identity <name_of_account> -properties ServicePrincipalNames | Select-Object -ExpandProperty ServicePrincipalNames ``` + `Get-ADUser -identity <name_of_account> -properties ServicePrincipalNames | Select-Object -ExpandProperty ServicePrincipalNames` For example:- Get-ADComputer -identity f5-big-ip -properties ServicePrincipalNames | Select-Object -ExpandProperty ServicePrincipalNames ++ `Get-ADUser -identity f5-big-ip -properties ServicePrincipalNames | Select-Object -ExpandProperty ServicePrincipalNames` 4. If the application ran in the machine context, add the SPN to the object of the computer account in Active Directory: - ```Set-ADComputer -Identity APP-VM-01 -ServicePrincipalNames @{Add="http/myexpenses.contoso.com"} ``` + `Set-ADComputer -Identity APP-VM-01 -ServicePrincipalNames @{ Add="http/myexpenses.contoso.com" }` With SPNs defined, establish trust for the APM service account to delegate to that service. The configuration varies depending on the topology of your BIG-IP instance and application server. With SPNs defined, establish trust for the APM service account to delegate to that 1. Set trust for the APM service account to delegate authentication: - ```Get-ADUser -Identity f5-big-ip | Set-ADAccountControl -TrustedToAuthForDelegation $true ``` + `Get-ADUser -Identity f5-big-ip | Set-ADAccountControl -TrustedToAuthForDelegation $true` 2. The APM service account needs to know the target SPN it's trusted to delegate to. Set the target SPN to the service account running your web application: - ```Set-ADUser -Identity f5-big-ip -Add @{'msDS-AllowedToDelegateTo'=@('HTTP/myexpenses.contoso.com')} ``` + `Set-ADUser -Identity f5-big-ip -Add @{ 'msDS-AllowedToDelegateTo'=@('HTTP/myexpenses.contoso.com') }` >[!NOTE] >You can complete these tasks with the Active Directory Users and Computers, Microsoft Management Console (MMC) snap-in, on a domain controller.
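After running the two delegation commands, a read-only query confirms the resulting account object. This verification step is a sketch that reuses the article's example account and SPN names:

```powershell
# Inspect the APM service account's delegation settings (read-only)
Get-ADUser -Identity f5-big-ip -Properties TrustedToAuthForDelegation, 'msDS-AllowedToDelegateTo' |
    Select-Object Name, TrustedToAuthForDelegation, 'msDS-AllowedToDelegateTo'

# Expect TrustedToAuthForDelegation = True and the delegation list to
# contain HTTP/myexpenses.contoso.com if both steps succeeded.
```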
In the Windows Server 2012 version, and higher, cross-domain KCD uses Resource-B You can use the PrincipalsAllowedToDelegateToAccount property of the application service account (computer or dedicated service account) to grant delegation from BIG-IP. For this scenario, use the following PowerShell command on a domain controller (Windows Server 2012 R2, or later) in the same domain as the application. -Use an SPN defined against a web application service account. For better security, use a dedicated SPN that matches the host header of the application. For example, because the web application host header in this example is myexpenses.contoso.com, add HTTP/myexpenses.contoso.com to the application service account object in Active Directory (AD): +Use an SPN defined against a web application service account. For better security, use a dedicated SPN that matches the host header of the application. For example, because the web application host header in this example is `myexpenses.contoso.com`, add `HTTP/myexpenses.contoso.com` to the application service account object in Active Directory (AD): -```Set-AdUser -Identity web_svc_account -ServicePrincipalNames @{Add="http/myexpenses.contoso.com"} ``` +`Set-AdUser -Identity web_svc_account -ServicePrincipalNames @{ Add="http/myexpenses.contoso.com" }` For the following commands, note the context. 
If the web_svc_account service runs in the context of a user account, use these commands: -```$big-ip= Get-ADComputer -Identity f5-big-ip -server dc.contoso.com ``` -```Set-ADUser -Identity web_svc_account -PrincipalsAllowedToDelegateToAccount ``` -```$big-ip Get-ADUser web_svc_account -Properties PrincipalsAllowedToDelegateToAccount ``` +`${big-ip} = Get-ADComputer -Identity f5-big-ip -Server dc.contoso.com` ++`Set-ADUser -Identity web_svc_account -PrincipalsAllowedToDelegateToAccount ${big-ip}` ++`Get-ADUser web_svc_account -Properties PrincipalsAllowedToDelegateToAccount` If the web_svc_account service runs in the context of a computer account, use these commands: -```$big-ip= Get-ADComputer -Identity f5-big-ip -server dc.contoso.com ``` -```Set-ADComputer -Identity web_svc_account -PrincipalsAllowedToDelegateToAccount ``` -```$big-ip Get-ADComputer web_svc_account -Properties PrincipalsAllowedToDelegateToAccount ``` +`${big-ip} = Get-ADComputer -Identity f5-big-ip -Server dc.contoso.com` ++`Set-ADComputer -Identity web_svc_account -PrincipalsAllowedToDelegateToAccount ${big-ip}` ++`Get-ADComputer web_svc_account -Properties PrincipalsAllowedToDelegateToAccount` For more information, see [Kerberos Constrained Delegation across domains](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/hh831477(v=ws.11)). |
active-directory | F5 Big Ip Ldap Header Easybutton | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-ldap-header-easybutton.md | Initiate the APM **Guided Configuration** to launch the **Easy Button** template 1. Navigate to **Access > Guided Configuration > Microsoft Integration** and select **Azure AD Application**. - ![Screenshot of the Microsoft Entra Application option on Guided Configuration.](./media/f5-big-ip-easy-button-ldap/easy-button-template.png) + ![Screenshot of the Azure A D Application option on Guided Configuration.](./media/f5-big-ip-easy-button-ldap/easy-button-template.png) 2. Review the list of steps and select **Next** This section contains properties to manually configure a new BIG-IP SAML applica For this scenario, select **F5 BIG-IP APM Azure AD Integration > Add**. - ![Screenshot of the Add option under Configuration Properties on Azure Configuration.](./media/f5-big-ip-easy-button-ldap/azure-config-add-app.png) #### Azure Configuration |
active-directory | F5 Passwordless Vpn | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-passwordless-vpn.md | Set up a SAML federation trust between the BIG-IP to allow the Microsoft Entra B 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). 2. Browse to **Identity** > **Applications** > **Enterprise applications** > **All applications**, then select **New application**.-3. In the gallery, search for F5 and select **F5 BIG-IP APM Azure AD integration**. +3. In the gallery, search for *F5* and select **F5 BIG-IP APM Azure AD integration**. 4. Enter a name for the application. 5. Select **Add** then **Create**. 6. The name, as an icon, appears in the Microsoft Entra admin center and Office 365 portal. |
active-directory | Migrate Adfs Saml Based Sso | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-adfs-saml-based-sso.md | SaaS apps need to know where to send authentication requests and how to validate | **IdP sign-out URL**<p>Sign-out URL of the IdP from the app's perspective (where the user is redirected when they choose to sign out of the app).| The sign-out URL is either the same as the sign-on URL, or the same URL with "wa=wsignout1.0" appended. For example: `https://fs.contoso.com/adfs/ls/?wa=wsignout1.0`| Replace {tenant-id} with your tenant ID.<p>For apps that use the SAML-P protocol:<p>[https://login.microsoftonline.com/{tenant-id}/saml2](https://login.microsoftonline.com/{tenant-id}/saml2) <p> For apps that use the WS-Federation protocol: [https://login.microsoftonline.com/common/wsfederation?wa=wsignout1.0](https://login.microsoftonline.com/common/wsfederation?wa=wsignout1.0) | | **Token signing certificate**<p>The IdP uses the private key of the certificate to sign issued tokens. It verifies that the token came from the same IdP that the app is configured to trust.| Find the AD FS token signing certificate in AD FS Management under **Certificates**.| Find it in the Microsoft Entra admin center in the application's **Single sign-on properties** under the header **SAML Signing Certificate**. There, you can download the certificate for upload to the app. <p>If the application has more than one certificate, you can find all certificates in the federation metadata XML file. | | **Identifier/ "issuer"**<p>Identifier of the IdP from the app's perspective (sometimes called the "issuer ID").<p>In the SAML token, the value appears as the Issuer element.| The identifier for AD FS is usually the federation service identifier in AD FS Management under **Service > Edit Federation Service Properties**. 
For example: `http://fs.contoso.com/adfs/services/trust`| Replace {tenant-id} with your tenant ID.<p>https:\//sts.windows.net/{tenant-id}/ |-| **IdP federation metadata**<p>Location of the IdP's publicly available federation metadata. (Some apps use federation metadata as an alternative to the administrator configuring URLs, identifier, and token signing certificate individually.)| Find the AD FS federation metadata URL in AD FS Management under **Service > Endpoints > Metadata > Type: Federation Metadata**. For example: `https://fs.contoso.com/FederationMetadat). | +| **IdP federation metadata**<p>Location of the IdP's publicly available federation metadata. (Some apps use federation metadata as an alternative to the administrator configuring URLs, identifier, and token signing certificate individually.)| Find the AD FS federation metadata URL in AD FS Management under **Service > Endpoints > Metadata > Type: Federation Metadata**. For example: `https://fs.contoso.com/FederationMetadata/2007-06/FederationMetadata.xml`. | ## Next steps |
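The endpoint mapping above can also be read programmatically from an IdP's federation metadata. The sketch below is illustrative only: the sample XML is a hypothetical, heavily trimmed metadata document (real AD FS or Microsoft Entra metadata is far larger and carries an XML signature), and the element names follow the standard SAML metadata schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical, heavily trimmed federation metadata document.
SAMPLE_METADATA = """<?xml version="1.0"?>
<EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata"
                  entityID="http://fs.contoso.com/adfs/services/trust">
  <IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <SingleSignOnService
        Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect"
        Location="https://fs.contoso.com/adfs/ls/"/>
  </IDPSSODescriptor>
</EntityDescriptor>"""

MD = "{urn:oasis:names:tc:SAML:2.0:metadata}"

def read_idp_endpoints(metadata_xml: str) -> dict:
    """Return the issuer ID and sign-on URL advertised by the metadata."""
    root = ET.fromstring(metadata_xml)
    sso = root.find(f"{MD}IDPSSODescriptor/{MD}SingleSignOnService")
    return {"issuer": root.get("entityID"), "sign_on_url": sso.get("Location")}

print(read_idp_endpoints(SAMPLE_METADATA))
```

The same parsing pattern applies to the Microsoft Entra metadata published per tenant (the `federationmetadata/2007-06/federationmetadata.xml` document), which is why some apps accept a metadata URL instead of individually configured endpoints.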
active-directory | Migrate Okta Sign On Policies Conditional Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-okta-sign-on-policies-conditional-access.md | Before you convert to Conditional Access, confirm the base MFA tenant settings f ![Screenshot of the multifactor authentication screen.](media/migrate-okta-sign-on-policies-conditional-access/legacy-portal.png) -5. Confirm there are no users enabled for legacy MFA: On the **multifactor authentication** menu, on **multifactor authentication status**, select **Enabled** and **Enforced**. If the tenant has users in the following views, disable them in the legacy menu. +5. Confirm there are no users enabled for legacy MFA: On the **Multifactor authentication** menu, on **Multifactor authentication status**, select **Enabled** and **Enforced**. If the tenant has users in the following views, disable them in the legacy menu. ![Screenshot of the multifactor authentication screen with the search feature highlighted.](media/migrate-okta-sign-on-policies-conditional-access/disable-user-portal.png) |
active-directory | Migrate Okta Sync Provisioning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-okta-sync-provisioning.md | You can connect to Microsoft Graph PowerShell and examine the current ImmutableI `Install-Module AzureAD` in an administrative session before you run the following commands: ```Powershell-Import-module AzureAD +Import-Module AzureAD Connect-MgGraph ``` After you prepare your list of source and destination targets, install a Microso 1. Download and install Microsoft Entra Connect on a server. See, [Custom installation of Microsoft Entra Connect](../hybrid/connect/how-to-connect-install-custom.md). 2. In the left panel, select **Identifying users**.-3. On the **Uniquely identifying your users** page, under **Select how users should be identified with Azure AD**, select **Choose a specific attribute**. +3. On the **Uniquely identifying your users** page, under **Select how users should be identified with Microsoft Entra ID**, select **Choose a specific attribute**. 4. If you haven't modified the Okta default, select **mS-DS-ConsistencyGUID**. >[!WARNING] After you disable Okta provisioning, the Microsoft Entra cloud sync agent can sy ## Next steps - [Tutorial: Migrate your applications from Okta to Microsoft Entra ID](migrate-applications-from-okta.md)-- [Tutorial: Migrate Okta federation to Microsoft Entra managed authentication](migrate-okta-federation.md)+- [Tutorial: Migrate Okta federation to Microsoft Entra ID managed authentication](migrate-okta-federation.md) - [Tutorial: Migrate Okta sign-on policies to Microsoft Entra Conditional Access](./migrate-okta-sign-on-policies-conditional-access.md) |
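When comparing the ImmutableID values examined above against on-premises objectGUID or mS-DS-ConsistencyGuid values, it helps to remember that ImmutableId is conventionally the Base64 encoding of the GUID's byte representation (as .NET's `Guid.ToByteArray()` emits it). A minimal Python sketch of that conversion, using a made-up GUID for illustration; this is not part of the migration tooling itself:

```python
import base64
import uuid

def guid_to_immutable_id(object_guid: str) -> str:
    """Base64 of the GUID bytes in the layout .NET Guid.ToByteArray() uses."""
    return base64.b64encode(uuid.UUID(object_guid).bytes_le).decode("ascii")

def immutable_id_to_guid(immutable_id: str) -> str:
    """Recover the objectGUID string from an ImmutableId value."""
    return str(uuid.UUID(bytes_le=base64.b64decode(immutable_id)))

# Made-up GUID for illustration only.
guid = "4b6e2b7f-1c3d-4e5f-8a9b-0c1d2e3f4a5b"
immutable_id = guid_to_immutable_id(guid)
assert immutable_id_to_guid(immutable_id) == guid
```

Note the `bytes_le` (mixed-endian) layout: encoding the plain big-endian GUID bytes produces a different, mismatching ImmutableId.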
active-directory | Silverfort Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/silverfort-integration.md | +<!-- docutune:ignore "Azure A ?D" --> + In this tutorial, learn how to integrate your on-premises Silverfort implementation with Microsoft Entra ID. Learn more: [Microsoft Entra hybrid joined devices](../devices/concept-hybrid-join.md). Set up Silverfort Azure AD Adapter in your Microsoft Entra tenant: 4. Select **Save Changes**. 5. On the **Permissions requested** dialog, select **Accept**. - ![image shows Microsoft Entra bridge connector](./media/silverfort-integration/bridge-connector.png) + ![image shows Azure A D bridge connector](./media/silverfort-integration/bridge-connector.png) ![image shows registration confirmation](./media/silverfort-integration/grant-permission.png) Set up Silverfort Azure AD Adapter in your Microsoft Entra tenant: 7. On the **Settings** page, select **Save Changes**. - ![image shows the Azure AD Adapter](./media/silverfort-integration/silverfort-adapter.png) + ![image shows the Azure A D Adapter](./media/silverfort-integration/silverfort-adapter.png) 8. Sign in to your Microsoft Entra account. In the left pane, select **Enterprise applications**. The **Silverfort Azure AD Adapter** application appears as registered. Set up Silverfort Azure AD Adapter in your Microsoft Entra tenant: 17. For Action, select **Azure AD BRIDGE**. - ![image shows save Azure AD bridge](./media/silverfort-integration/save-bridge.png) + ![image shows save Azure A D bridge](./media/silverfort-integration/save-bridge.png) 18. Select **Save**. You're prompted to turn on the policy. |
active-directory | V2 Howto App Gallery Listing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/v2-howto-app-gallery-listing.md | To publish your application in the Microsoft Entra application gallery, you need To publish your application in the gallery, you must first read and agree to specific [terms and conditions](https://azure.microsoft.com/support/legal/active-directory-app-gallery-terms/). - Implement support for *single sign-on* (SSO). To learn more about supported options, see [Plan a single sign-on deployment](plan-sso-deployment.md). - For password SSO, make sure that your application supports form authentication so that password vaulting can be used.- - For federated applications (OpenID and SAML/WS-Fed), the application must support the [software-as-a-service (SaaS) model](https://azure.microsoft.com/overview/what-is-saas/). Enterprise gallery applications must support multiple user configurations and not any specific user. - - For federated applications (OpenID and SAML/WS-Fed), the application can be single **or** multitenanted - - For OpenID Connect, if the application is multitenanted the [Microsoft Entra consent framework](../develop/application-consent-experience.md) must be correctly implemented. -- Provisioning is optional yet highly recommended. To learn more about Microsoft Entra SCIM, see [build a SCIM endpoint and configure user provisioning with Microsoft Entra ID](../app-provisioning/use-scim-to-provision-users-and-groups.md).+ - For federated applications (SAML/WS-Fed), the application should preferably support the [software-as-a-service (SaaS) model](https://azure.microsoft.com/overview/what-is-saas/), although this isn't mandatory; the application can also be on-premises. Enterprise gallery applications must support multiple user configurations and not any specific user. 
++ - For OpenID Connect, the application should be multitenant, and the [Microsoft Entra consent framework](../develop/application-consent-experience.md) must be correctly implemented. To convert the application to multitenant, see [how to make your application multitenant](../develop/howto-convert-app-to-be-multi-tenant.md). +- Provisioning is optional yet highly recommended. To learn more about Microsoft Entra SCIM, see [build a SCIM endpoint and configure user provisioning with Azure AD](../app-provisioning/use-scim-to-provision-users-and-groups.md). You can sign up for a free, test Development account. It's free for 90 days and you get all of the premium Microsoft Entra features with it. You can also extend the account if you use it for development work: [Join the Microsoft 365 Developer Program](/office/developer-program/microsoft-365-developer-program). Create documentation that includes the following information at minimum: ### App documentation on the Microsoft site -When your application is added to the gallery, documentation is created that explains the step-by-step process. For an example, see [Tutorials for integrating SaaS applications with Microsoft Entra ID](../saas-apps/tutorial-list.md). This documentation is created based on your submission to the gallery. You can easily update the documentation if you make changes to your application by using your GitHub account. +When your SAML application is added to the gallery, documentation is created that explains the step-by-step process. For an example, see [Tutorials for integrating SaaS applications with Microsoft Entra ID](../saas-apps/tutorial-list.md). This documentation is created based on your submission to the gallery. You can easily update the documentation if you make changes to your application by using your GitHub account. ++For OIDC applications, there's no application-specific documentation; only the generic [tutorial](../develop/v2-protocols-oidc.md) covering all OpenID Connect applications is available. 
## Submit your application You can track application requests by customer name at the Microsoft Application ### Timelines -Listing an **SAML 2.0 or WS-Fed application** in the gallery takes 7 to 10 business days. +Listing a **SAML 2.0 or WS-Fed application** in the gallery takes 12 to 15 business days. :::image type="content" source="./media/howto-app-gallery-listing/timeline.png" alt-text="Screenshot that shows the timeline for listing a SAML application.":::  -Listing an **OpenID Connect application** in the gallery takes 2 to 5 business days. +Listing an **OpenID Connect application** in the gallery takes 7 to 10 business days. Listing a **SCIM provisioning application** in the gallery varies, depending on numerous factors. |
active-directory | Msi Tutorial Linux Vm Access Arm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/msi-tutorial-linux-vm-access-arm.md | To complete these steps, you need an SSH client. If you are using Windows, you c ``` > [!NOTE]- > The value of the `resource` parameter must be an exact match for what is expected by Azure AD. When using the Resource Manager resource ID, you must include the trailing slash on the URI.  + > The value of the `resource` parameter must be an exact match for what is expected by Microsoft Entra ID. When using the Resource Manager resource ID, you must include the trailing slash on the URI.  The response includes the access token you need to access Azure Resource Manager.  |
active-directory | Tutorial Linux Vm Access Arm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-arm.md | When you use managed identities for Azure resources, your code can get access to To complete these steps, you need an SSH client. If you're using Windows, you can use the SSH client in the [Windows Subsystem for Linux](/windows/wsl/about). If you need assistance configuring your SSH client's keys, see [How to Use SSH keys with Windows on Azure](../../virtual-machines/linux/ssh-from-windows.md), or [How to create and use an SSH public and private key pair for Linux VMs in Azure](../../virtual-machines/linux/mac-create-ssh-keys.md). -1. In the portal, navigate to your Linux VM and in the **Overview**, select **Connect**.   -2. **Connect** to the VM with the SSH client of your choice.  -3. In the terminal window, using `curl`, make a request to the local managed identities for Azure resources endpoint to get an access token for Azure Resource Manager.   +1. In the portal, navigate to your Linux VM and in the **Overview**, select **Connect**. ++2. **Connect** to the VM with the SSH client of your choice. ++3. In the terminal window, using `curl`, make a request to the local managed identities for Azure resources endpoint to get an access token for Azure Resource Manager. +  +The `curl` request for the access token is below. ++```bash +curl 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/' -H Metadata:true +``` ++> [!NOTE] +> The value of the `resource` parameter must be an exact match for what is expected by Microsoft Entra ID. In the case of the Resource Manager resource ID, you must include the trailing slash on the URI. ++The response includes the access token you need to access Azure Resource Manager. 
++Response: ++```json +{ + "access_token":"eyJ0eXAiOi...", + "refresh_token":"", + "expires_in":"3599", + "expires_on":"1504130527", + "not_before":"1504126627", + "resource":"https://management.azure.com", + "token_type":"Bearer" +} +``` ++You can use this access token to access Azure Resource Manager, for example to read the details of the Resource Group to which you previously granted this VM access. Replace the values of `<SUBSCRIPTION-ID>`, `<RESOURCE-GROUP>`, and `<ACCESS-TOKEN>` with the ones you created earlier. ++> [!NOTE] +> The URL is case-sensitive, so ensure you use the exact same case you used earlier when you named the resource group, including the uppercase "G" in "resourceGroups". ++```bash +curl https://management.azure.com/subscriptions/<SUBSCRIPTION-ID>/resourceGroups/<RESOURCE-GROUP>?api-version=2016-09-01 -H "Authorization: Bearer <ACCESS-TOKEN>"  +``` ++The response contains the specific resource group information:  - The `curl` request for the access token is below.   - - ```bash - curl 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/' -H Metadata:true    - ``` - - > [!NOTE] - > The value of the “resource” parameter must be an exact match for what is expected by Azure AD.  In the case of the Resource Manager resource ID, you must include the trailing slash on the URI.  - - The response includes the access token you need to access Azure Resource Manager.  - - Response:   -- ```bash - {"access_token":"eyJ0eXAiOi...", - "refresh_token":"", - "expires_in":"3599", - "expires_on":"1504130527", - "not_before":"1504126627", - "resource":"https://management.azure.com", - "token_type":"Bearer"}  - ``` - - You can use this access token to access Azure Resource Manager, for example to read the details of the Resource Group to which you previously granted this VM access. 
Replace the values of \<SUBSCRIPTION ID\>, \<RESOURCE GROUP\>, and \<ACCESS TOKEN\> with the ones you created earlier.  - - > [!NOTE] - > The URL is case-sensitive, so ensure if you are using the exact same case as you used earlier when you named the Resource Group, and the uppercase “G” in “resourceGroup”.   - - ```bash - curl https://management.azure.com/subscriptions/<SUBSCRIPTION ID>/resourceGroups/<RESOURCE GROUP>?api-version=2016-09-01 -H "Authorization: Bearer <ACCESS TOKEN>"  - ``` - - The response back with the specific Resource Group information:  -   - ```bash - {"id":"/subscriptions/98f51385-2edc-4b79-bed9-7718de4cb861/resourceGroups/DevTest","name":"DevTest","location":"westus","properties":{"provisioningState":"Succeeded"}}  - ``` +```json +{ +  "id":"/subscriptions/98f51385-2edc-4b79-bed9-7718de4cb861/resourceGroups/DevTest", +  "name":"DevTest", +  "location":"westus", +  "properties":{ +    "provisioningState":"Succeeded" +  } +} +``` ## Next steps -In this quickstart, you learned how to use a system-assigned managed identity to access the Azure Resource Manager API. To learn more about Azure Resource Manager see: +In this quickstart, you learned how to use a system-assigned managed identity to access the Azure Resource Manager API. For more information about Azure Resource Manager, see: > [!div class="nextstepaction"] >[Azure Resource Manager](../../azure-resource-manager/management/overview.md) |
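The two curl calls above follow a fixed pattern: request a token from the IMDS endpoint with the `Metadata: true` header, then present it to Azure Resource Manager as a bearer token. A minimal Python sketch of the response handling; no network calls are made, and the sample body simply mirrors the response shape shown above with the token value elided:

```python
import json

# IMDS token endpoint and required header, matching the curl example above.
IMDS_TOKEN_URL = (
    "http://169.254.169.254/metadata/identity/oauth2/token"
    "?api-version=2018-02-01&resource=https://management.azure.com/"
)
IMDS_HEADERS = {"Metadata": "true"}

# Sample body mirroring the response shape shown above (token value elided).
SAMPLE_RESPONSE = (
    '{"access_token":"eyJ0eXAiOi...","expires_on":"1504130527",'
    '"resource":"https://management.azure.com","token_type":"Bearer"}'
)

def to_auth_header(token_response_body: str) -> str:
    """Turn an IMDS token response body into an Authorization header value."""
    body = json.loads(token_response_body)
    return f'{body["token_type"]} {body["access_token"]}'

# This is the value the second curl call passes with -H "Authorization: ...".
print(to_auth_header(SAMPLE_RESPONSE))  # Bearer eyJ0eXAiOi...
```

In a real script, `expires_on` (a Unix timestamp) tells you how long the token can be cached before requesting a fresh one from IMDS.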
active-directory | Concept Sign Ins | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-sign-ins.md | Title: Sign-in logs in Microsoft Entra ID -description: Learn about the four types of sign-in logs available in Microsoft Entra Monitoring and health. +description: Learn about the four types of sign-in logs available in Microsoft Entra monitoring and health. |
active-directory | Howto Configure Prerequisites For Reporting Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-configure-prerequisites-for-reporting-api.md | To get access to the reporting data through the API, you need to have one of the In order to access the sign-in reports for a tenant, a Microsoft Entra tenant must have associated Microsoft Entra ID P1 or P2 license. If the directory type is Azure AD B2C, the sign-in reports are accessible through the API without any other license requirement. -Registration is needed even if you're accessing the reporting API using a script. The registration gives you an **Application ID**, which is required for the authorization calls and enables your code to receive tokens. To configure your directory to access the Microsoft Entra ID reporting API, you must sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/) in one of the required roles. +Registration is needed even if you're accessing the reporting API using a script. The registration gives you an **Application ID**, which is required for the authorization calls and enables your code to receive tokens. To configure your directory to access the Microsoft Entra reporting API, you must sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/) in one of the required roles. > [!IMPORTANT] > Applications running under credentials with administrator privileges can be very powerful, so be sure to keep the application's ID and secret credentials in a secure location. To enable your application to access Microsoft Graph without user intervention, ### Grant permissions -To access the Microsoft Entra ID reporting API, you must grant your app *Read directory data* and *Read all audit log data* permissions for the Microsoft Graph API. 
+To access the Microsoft Entra reporting API, you must grant your app *Read directory data* and *Read all audit log data* permissions for the Microsoft Graph API. 1. Browse to **Identity** > **Applications** > **App Registrations**. 1. Select **Add a permission**. Once you have the app registration configured, you can run activity log queries ## Access reports using Microsoft Graph PowerShell -To use PowerShell to access the Microsoft Entra ID reporting API, you need to gather a few configuration settings. These settings were created as a part of the [app registration process](#register-an-azure-ad-application). +To use PowerShell to access the Microsoft Entra reporting API, you need to gather a few configuration settings. These settings were created as a part of the [app registration process](#register-an-azure-ad-application). - Tenant ID - Client app ID Programmatic access APIs: <a name='troubleshoot-errors-in-azure-active-directory-reporting-api'></a> -### Troubleshoot errors in Microsoft Entra ID reporting API +### Troubleshoot errors in Microsoft Entra reporting API **500 HTTP internal server error while accessing Microsoft Graph beta endpoint**: We don't currently support the Microsoft Graph beta endpoint - make sure to access the activity logs using the Microsoft Graph v1.0 endpoint. - GET `https://graph.microsoft.com/v1.0/auditLogs/directoryAudits` |
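Once the app registration described above supplies a tenant ID, client app ID, and secret, unattended access to the reporting API is a standard OAuth 2.0 client-credentials exchange against the v2.0 token endpoint. A minimal sketch that only builds the request, with placeholder values and nothing actually sent; the permission names in the comment are the Graph equivalents of the *Read directory data* and *Read all audit log data* grants mentioned above:

```python
from urllib.parse import urlencode

def build_token_request(tenant_id: str, client_id: str, client_secret: str):
    """Build (but don't send) a client-credentials token request for Microsoft Graph."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        # .default requests the app permissions granted during registration,
        # e.g. Directory.Read.All and AuditLog.Read.All for the reporting API.
        "scope": "https://graph.microsoft.com/.default",
    })
    return url, body

# Placeholder values only; substitute the settings from your app registration.
url, body = build_token_request("<TENANT-ID>", "<CLIENT-ID>", "<CLIENT-SECRET>")
```

The returned access token is then sent as a bearer token on calls such as `GET https://graph.microsoft.com/v1.0/auditLogs/directoryAudits`.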
active-directory | Howto Manage Inactive User Accounts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-manage-inactive-user-accounts.md | The last sign-in date and time shown on this tile may take up to 6 hours to upda ## Next steps -* [Get data using the Microsoft Entra ID reporting API with certificates](./howto-configure-prerequisites-for-reporting-api.md) +* [Get data using the Microsoft Entra reporting API with certificates](./howto-configure-prerequisites-for-reporting-api.md) * [Audit API reference](/graph/api/resources/directoryaudit) * [Sign-in activity report API reference](/graph/api/resources/signin) |
active-directory | Howto Stream Logs To Event Hub | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-stream-logs-to-event-hub.md | Download and open the [configuration guide for ArcSight SmartConnector for Azure 1. Complete the steps in the **Prerequisites** section of the ArcSight configuration guide. This section includes the following steps: * Set user permissions in Azure to ensure there's a user with the **owner** role to deploy and configure the connector. * Open ports on the server with Syslog NG Daemon SmartConnector so it's accessible from Azure. - * The deployment runs a Windows PowerShell script, so you must enable PowerShell to run scripts on the machine where you want to deploy the connector. + * The deployment runs a PowerShell script, so you must enable PowerShell to run scripts on the machine where you want to deploy the connector. 1. Follow the steps in the **Deploying the Connector** section of the ArcSight configuration guide to deploy the connector. This section walks you through how to download and extract the connector, configure application properties and run the deployment script from the extracted folder. |
active-directory | Overview Monitoring Health | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-monitoring-health.md | Title: What is Microsoft Entra Monitoring and health? -description: Provides a general overview of Microsoft Entra Monitoring and health. + Title: What is Microsoft Entra monitoring and health? +description: Provides a general overview of Microsoft Entra monitoring and health. -# What is Microsoft Entra Monitoring and health? +# What is Microsoft Entra monitoring and health? -The features of Microsoft Entra Monitoring and health provide a comprehensive view of identity related activity in your environment. This data enables you to: +The features of Microsoft Entra monitoring and health provide a comprehensive view of identity related activity in your environment. This data enables you to: - Determine how your users utilize your apps and services. - Detect potential risks affecting the health of your environment. |
active-directory | Plan Monitoring And Reporting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/plan-monitoring-and-reporting.md | -# Microsoft Entra Monitoring & health deployment dependencies +# Microsoft Entra monitoring and health deployment dependencies Your Microsoft Entra reporting and monitoring solution depends on legal, security, operational requirements, and your environment's processes. Use the following sections to learn about design options and deployment strategy. You'll need a Microsoft Entra ID P1 or P2 license to access the Microsoft Entra For detailed feature and licensing information, see the [Microsoft Entra pricing guide](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing). -To deploy Microsoft Entra Monitoring & health you'll need a user who is a Global Administrator or Security Administrator for the Microsoft Entra tenant. +To deploy Microsoft Entra monitoring and health, you'll need a user who is a Global Administrator or Security Administrator for the Microsoft Entra tenant. * [Azure Monitor data platform](../../azure-monitor/data-platform.md) * [Azure Monitor naming and terminology changes](../../azure-monitor/overview.md) To deploy Microsoft Entra Monitoring & health you'll need a user who is a Global <a name='plan-and-deploy-an-azure-ad-reporting-and-monitoring-deployment-project'></a> -## Plan and deploy a Microsoft Entra Monitoring & health deployment project +## Plan and deploy a Microsoft Entra monitoring and health deployment project Reporting and monitoring are used to meet your business requirements, gain insights into usage patterns, and increase your organization's security posture. In this project, you'll define the audiences that will consume and monitor reports, and define your Microsoft Entra monitoring architecture. |
active-directory | Recommendation Migrate From Adal To Msal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-migrate-from-adal-to-msal.md | Title: Migrate from ADAL to MSAL recommendation -description: Learn why you should migrate from the Azure Active Directory Library to the Microsoft Authentication Libraries. +description: Learn why you should migrate from the Azure Active Directory Authentication Library to the Microsoft Authentication Libraries. -# Microsoft Entra recommendation: Migrate from the Azure Active Directory Library to the Microsoft Authentication Libraries +# Microsoft Entra recommendation: Migrate from the Azure Active Directory Authentication Library to the Microsoft Authentication Libraries [Microsoft Entra recommendations](overview-recommendations.md) is a feature that provides you with personalized insights and actionable guidance to align your tenant with recommended best practices. |
active-directory | Reference Audit Activities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-audit-activities.md | Title: Microsoft Entra audit activity reference + Title: Microsoft Entra audit log activity reference description: Get an overview of the audit activities that can be logged in your audit logs in Microsoft Entra ID. |
active-directory | Reference Powershell Reporting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-powershell-reporting.md | -> These PowerShell cmdlets currently only work with the [Microsoft Entra ID Preview](/powershell/module/azuread/?view=azureadps-2.0-preview&preserve-view=true#directory_auditing) Module. Please note that the preview module is not suggested for production use. +> These PowerShell cmdlets currently only work with the [Azure Active Directory PowerShell for Graph Preview](/powershell/module/azuread/?view=azureadps-2.0-preview&preserve-view=true#directory_auditing) module. Please note that the preview module is not suggested for production use. To install the public preview release, use the following: |
active-directory | Reference Reports Data Retention | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-reports-data-retention.md | If you already have activities data with your free license, then you can see it | Sign-ins | Seven days | 30 days | 30 days | | Microsoft Entra multifactor authentication usage | 30 days | 30 days | 30 days | -You can retain the audit and sign-in activity data for longer than the default retention period outlined in the previous table by routing it to an Azure storage account using Azure Monitor. For more information, see [Archive Microsoft Entra ID logs to an Azure storage account](quickstart-azure-monitor-route-logs-to-storage-account.md). +You can retain the audit and sign-in activity data for longer than the default retention period outlined in the previous table by routing it to an Azure storage account using Azure Monitor. For more information, see [Archive Microsoft Entra logs to an Azure storage account](quickstart-azure-monitor-route-logs-to-storage-account.md). **Security signals** You can retain the audit and sign-in activity data for longer than the default r ## Next steps - [Stream logs to an event hub](tutorial-azure-monitor-stream-logs-to-event-hub.md)-- [Learn how to download Microsoft Entra ID logs](howto-download-logs.md)+- [Learn how to download Microsoft Entra logs](howto-download-logs.md) |
active-directory | Delegate By Task | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/delegate-by-task.md | You can further restrict permissions by assigning roles at smaller scopes or by > | Task | Least privileged role | Additional roles | > | - | | - | > | Create Microsoft Entra Domain Services instance | [Application Administrator](permissions-reference.md#application-administrator)<br>[Groups Administrator](permissions-reference.md#groups-administrator)<br> [Domain Services Contributor](../../role-based-access-control/built-in-roles.md#domain-services-contributor)| |-> | Perform all Microsoft Entra Domain Services tasks | [AAD DC Administrators group](../../active-directory-domain-services/tutorial-create-management-vm.md#administrative-tasks-you-can-perform-on-a-managed-domain) | | +> | Perform all Microsoft Entra Domain Services tasks | [AAD DC Administrators group](../../active-directory-domain-services/tutorial-create-management-vm.md#administrative-tasks-you-can-perform-on-a-managed-domain) | | > | Read all configuration | Reader on Azure subscription containing AD DS service | | ## Devices |
active-directory | Cisco Webex Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cisco-webex-provisioning-tutorial.md | -> This tutorial describes a connector built on top of the Microsoft Entra user Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Microsoft Entra ID](../app-provisioning/user-provisioning.md). +> This tutorial describes a connector built on top of the Microsoft Entra user provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Microsoft Entra ID](../app-provisioning/user-provisioning.md). > > This connector is currently in Preview. For more information about previews, see [Universal License Terms For Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all). |
active-directory | Colab Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/colab-tutorial.md | + + Title: Microsoft Entra SSO integration with CoLab +description: Learn how to configure single sign-on between Microsoft Entra ID and CoLab. ++++++++ Last updated : 09/25/2023+++++# Microsoft Entra SSO integration with CoLab ++In this tutorial, you learn how to integrate CoLab with Microsoft Entra ID. When you integrate CoLab with Microsoft Entra ID, you can: ++* Control in Microsoft Entra ID who has access to CoLab. +* Enable your users to be automatically signed-in to CoLab with their Microsoft Entra accounts. +* Manage your accounts in one central location. ++## Prerequisites ++To integrate Microsoft Entra ID with CoLab, you need: ++* A Microsoft Entra subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* CoLab single sign-on (SSO) enabled subscription. ++## Scenario description ++In this tutorial, you configure and test Microsoft Entra SSO in a test environment. ++* CoLab supports **SP and IDP** initiated SSO. +* CoLab supports **Just In Time** user provisioning. ++## Adding CoLab from the gallery ++To configure the integration of CoLab into Microsoft Entra ID, you need to add CoLab from the gallery to your list of managed SaaS apps. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **New application**. +1. In the **Add from the gallery** section, type **CoLab** in the search box. +1. Select **CoLab** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. 
++Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides) ++## Configure and test Microsoft Entra SSO for CoLab ++Configure and test Microsoft Entra SSO with CoLab using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between a Microsoft Entra user and the related user in CoLab. ++To configure and test Microsoft Entra SSO with CoLab, perform the following steps: ++1. **[Configure Microsoft Entra SSO](#configure-microsoft-entra-sso)** - to enable your users to use this feature. + 1. **[Create a Microsoft Entra ID test user](#create-a-microsoft-entra-id-test-user)** - to test Microsoft Entra single sign-on with B.Simon. + 1. **[Assign the Microsoft Entra ID test user](#assign-the-microsoft-entra-id-test-user)** - to enable B.Simon to use Microsoft Entra single sign-on. +1. **[Configure CoLab SSO](#configure-colab-sso)** - to configure the single sign-on settings on application side. + 1. **[Create CoLab test user](#create-colab-test-user)** - to have a counterpart of B.Simon in CoLab that is linked to the Microsoft Entra ID representation of user. +1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ++## Configure Microsoft Entra SSO ++Follow these steps to enable Microsoft Entra SSO in the Microsoft Entra admin center. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **CoLab** > **Single sign-on**. +1. 
On the **Select a single sign-on method** page, select **SAML**. +1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings. ++ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration") ++1. On the **Basic SAML Configuration** section, perform the following steps: ++ a. In the **Identifier** textbox, type a value using the following pattern: + `urn:auth0:colab-production:<customer>` ++ b. In the **Reply URL** textbox, type a URL using the following pattern: + ` https://login.colabsoftware.com/login/callback?connection=<Customer>` ++1. If you wish to configure the application in **SP** initiated mode, then perform the following step: ++ In the **Sign on URL** textbox, type the URL: + `https://app.colabsoftware.com/` ++ > [!NOTE] + > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [CoLab support team](mailto:support@colabsoftware.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Microsoft Entra admin center. ++1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer. ++ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate") ++1. On the **Set up CoLab** section, copy the appropriate URL(s) based on your requirement. ++ ![Screenshot shows to copy configuration URLs.](common/copy-configuration-urls.png "Metadata") ++### Create a Microsoft Entra ID test user ++In this section, you create a test user in the Microsoft Entra admin center called B.Simon. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [User Administrator](../roles/permissions-reference.md#user-administrator). +1. 
Browse to **Identity** > **Users** > **All users**. +1. Select **New user** > **Create new user**, at the top of the screen. +1. In the **User** properties, follow these steps: + 1. In the **Display name** field, enter `B.Simon`. + 1. In the **User principal name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`. + 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box. + 1. Select **Review + create**. +1. Select **Create**. ++### Assign the Microsoft Entra ID test user ++In this section, you enable B.Simon to use Microsoft Entra single sign-on by granting access to CoLab. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **CoLab**. +1. In the app's overview page, select **Users and groups**. +1. Select **Add user/group**, then select **Users and groups** in the **Add Assignment** dialog. + 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. + 1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected. + 1. In the **Add Assignment** dialog, click the **Assign** button. ++## Configure CoLab SSO ++To configure single sign-on on **CoLab** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Microsoft Entra admin center to [CoLab support team](mailto:support@colabsoftware.com). They set this setting to have the SAML SSO connection set properly on both sides. ++### Create CoLab test user ++In this section, a user called B.Simon is created in CoLab. 
CoLab supports just-in-time provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in CoLab, a new one is created when you attempt to access CoLab. ++## Test SSO ++In this section, you test your Microsoft Entra single sign-on configuration with following options. + +#### SP initiated: + +* Click on **Test this application** in Microsoft Entra admin center. This will redirect to CoLab Sign on URL where you can initiate the login flow. + +* Go to CoLab Sign-on URL directly and initiate the login flow from there. + +#### IDP initiated: + +* Click on **Test this application** in Microsoft Entra admin center and you should be automatically signed in to the CoLab for which you set up the SSO. + +You can also use Microsoft My Apps to test the application in any mode. When you click the CoLab tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the CoLab for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ++## Next Steps ++Once you configure CoLab you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app). |
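The CoLab tutorial above parameterizes its Identifier and Reply URL on a single `<customer>` connection name. A small helper can render both values for review before pasting them into **Basic SAML Configuration** — a sketch assuming only the customer placeholder varies; the helper itself is hypothetical, not CoLab or Entra tooling, and the real values must still come from CoLab support:

```python
def colab_saml_values(customer: str) -> dict:
    """Fill the <customer> placeholder in the Identifier (Entity ID),
    Reply URL (ACS URL), and SP-initiated Sign on URL patterns that the
    tutorial's Basic SAML Configuration section describes."""
    return {
        "identifier": f"urn:auth0:colab-production:{customer}",
        "reply_url": f"https://login.colabsoftware.com/login/callback?connection={customer}",
        "sign_on_url": "https://app.colabsoftware.com/",  # fixed; SP-initiated mode only
    }

values = colab_saml_values("contoso")
print(values["identifier"])  # urn:auth0:colab-production:contoso
```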
active-directory | F5 Big Ip Headers Easy Button | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/f5-big-ip-headers-easy-button.md | The Service Provider settings define the properties for the SAML SP instance of ### Microsoft Entra ID -This section defines all properties that you would normally use to manually configure a new BIG-IP SAML application within your Microsoft Entra tenant. Easy Button provides a set of pre-defined application templates for Oracle PeopleSoft, Oracle E-business Suite, Oracle JD Edwards, SAP ERP as well as generic SHA template for any other apps. For this scenario select **F5 BIG-IP APM Microsoft Entra Integration > Add**. +This section defines all properties that you would normally use to manually configure a new BIG-IP SAML application within your Microsoft Entra tenant. Easy Button provides a set of pre-defined application templates for Oracle PeopleSoft, Oracle E-business Suite, Oracle JD Edwards, SAP ERP as well as generic SHA template for any other apps. For this scenario, select **F5 BIG-IP APM Azure AD Integration > Add**. ![Screenshot for Azure configuration add BIG-IP application.](./media/f5-big-ip-headers-easy-button/azure-configuration-add-app.png) |
active-directory | Flock Safety Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/flock-safety-tutorial.md | + + Title: Microsoft Entra SSO integration with Flock Safety +description: Learn how to configure single sign-on between Microsoft Entra ID and Flock Safety. ++++++++ Last updated : 09/25/2023+++++# Microsoft Entra SSO integration with Flock Safety ++In this tutorial, you learn how to integrate Flock Safety with Microsoft Entra ID. When you integrate Flock Safety with Microsoft Entra ID, you can: ++* Control in Microsoft Entra ID who has access to Flock Safety. +* Enable your users to be automatically signed-in to Flock Safety with their Microsoft Entra accounts. +* Manage your accounts in one central location. ++## Prerequisites ++To integrate Microsoft Entra ID with Flock Safety, you need: ++* A Microsoft Entra subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* Flock Safety single sign-on (SSO) enabled subscription. ++## Scenario description ++In this tutorial, you configure and test Microsoft Entra SSO in a test environment. ++* Flock Safety supports **SP** initiated SSO. +* Flock Safety supports **Just In Time** user provisioning. ++## Adding Flock Safety from the gallery ++To configure the integration of Flock Safety into Microsoft Entra ID, you need to add Flock Safety from the gallery to your list of managed SaaS apps. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **New application**. +1. In the **Add from the gallery** section, type **Flock Safety** in the search box. +1. Select **Flock Safety** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. 
++Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides) ++## Configure and test Microsoft Entra SSO for Flock Safety ++Configure and test Microsoft Entra SSO with Flock Safety using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between a Microsoft Entra user and the related user in Flock Safety. ++To configure and test Microsoft Entra SSO with Flock Safety, perform the following steps: ++1. **[Configure Microsoft Entra SSO](#configure-microsoft-entra-sso)** - to enable your users to use this feature. + 1. **[Create a Microsoft Entra ID test user](#create-a-microsoft-entra-id-test-user)** - to test Microsoft Entra single sign-on with B.Simon. + 1. **[Assign the Microsoft Entra ID test user](#assign-the-microsoft-entra-id-test-user)** - to enable B.Simon to use Microsoft Entra single sign-on. +1. **[Configure Flock Safety SSO](#configure-flock-safety-sso)** - to configure the single sign-on settings on application side. + 1. **[Create Flock Safety test user](#create-flock-safety-test-user)** - to have a counterpart of B.Simon in Flock Safety that is linked to the Microsoft Entra ID representation of user. +1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ++## Configure Microsoft Entra SSO ++Follow these steps to enable Microsoft Entra SSO in the Microsoft Entra admin center. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. 
Browse to **Identity** > **Applications** > **Enterprise applications** > **Flock Safety** > **Single sign-on**. +1. On the **Select a single sign-on method** page, select **SAML**. +1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings. ++ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration") ++1. On the **Basic SAML Configuration** section, perform the following steps: ++ a. In the **Identifier** textbox, type a value using the following pattern: + `urn:auth0:prod-flock:<ID>` ++ b. In the **Reply URL** textbox, type a URL using the following pattern: + `https://login.flocksafety.com/login/callback?connection=<ID>` ++ c. In the **Sign on URL** textbox, type a URL using the following pattern: + `https://users.flocksafety.com/sso-login/<CustomName>` ++ > [!NOTE] + > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Flock Safety support team](mailto:support@flocksafety.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Microsoft Entra admin center. ++1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer. ++ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate") ++1. On the **Set up Flock Safety** section, copy the appropriate URL(s) based on your requirement. ++ ![Screenshot shows to copy configuration URLs.](common/copy-configuration-urls.png "Metadata") ++### Create a Microsoft Entra ID test user ++In this section, you create a test user in the Microsoft Entra admin center called B.Simon. ++1. 
Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [User Administrator](../roles/permissions-reference.md#user-administrator). +1. Browse to **Identity** > **Users** > **All users**. +1. Select **New user** > **Create new user**, at the top of the screen. +1. In the **User** properties, follow these steps: + 1. In the **Display name** field, enter `B.Simon`. + 1. In the **User principal name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`. + 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box. + 1. Select **Review + create**. +1. Select **Create**. ++### Assign the Microsoft Entra ID test user ++In this section, you enable B.Simon to use Microsoft Entra single sign-on by granting access to Flock Safety. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **Flock Safety**. +1. In the app's overview page, select **Users and groups**. +1. Select **Add user/group**, then select **Users and groups** in the **Add Assignment** dialog. + 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. + 1. If you're expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected. + 1. In the **Add Assignment** dialog, click the **Assign** button. ++## Configure Flock Safety SSO ++To configure single sign-on on **Flock Safety** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Microsoft Entra admin center to [Flock Safety support team](mailto:support@flocksafety.com). 
They set this setting to have the SAML SSO connection set properly on both sides. ++### Create Flock Safety test user ++In this section, a user called Britta Simon is created in Flock Safety. Flock Safety supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in Flock Safety, a new one is created after authentication. ++## Test SSO ++In this section, you test your Microsoft Entra single sign-on configuration with following options. + +* Click on **Test this application** in Microsoft Entra admin center. This will redirect to Flock Safety Sign-on URL where you can initiate the login flow. + +* Go to Flock Safety Sign-on URL directly and initiate the login flow from there. + +* You can use Microsoft My Apps. When you click the Flock Safety tile in the My Apps, this will redirect to Flock Safety Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ++## Next Steps ++Once you configure Flock Safety you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app). |
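Since the Flock Safety tutorial above warns that its three URL patterns are placeholders, a quick shape check before saving the Basic SAML Configuration can catch copy-paste mistakes. The regexes below are an illustrative tightening of the documented patterns (`urn:auth0:prod-flock:<ID>`, the login callback, and the `sso-login/<CustomName>` page), not an official format specification:

```python
import re

# Shape checks derived from the tutorial's placeholder patterns.
PATTERNS = {
    "identifier": re.compile(r"^urn:auth0:prod-flock:[\w-]+$"),
    "reply_url": re.compile(r"^https://login\.flocksafety\.com/login/callback\?connection=[\w-]+$"),
    "sign_on_url": re.compile(r"^https://users\.flocksafety\.com/sso-login/[\w-]+$"),
}

def check_saml_values(values: dict) -> list:
    """Return the names of any supplied values that fail their shape check."""
    return [name for name, rx in PATTERNS.items()
            if name in values and not rx.match(values[name])]

bad = check_saml_values({
    "identifier": "urn:auth0:prod-flock:12345",
    "reply_url": "https://login.flocksafety.com/login/callback?connection=12345",
    "sign_on_url": "https://example.com/wrong",  # deliberately malformed
})
print(bad)  # ['sign_on_url']
```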
active-directory | Glia Hub Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/glia-hub-tutorial.md | + + Title: Microsoft Entra SSO integration with Glia Hub +description: Learn how to configure single sign-on between Microsoft Entra ID and Glia Hub. ++++++++ Last updated : 09/25/2023+++++# Microsoft Entra SSO integration with Glia Hub ++In this tutorial, you learn how to integrate Glia Hub with Microsoft Entra ID. When you integrate Glia Hub with Microsoft Entra ID, you can: ++* Control in Microsoft Entra ID who has access to Glia Hub. +* Enable your users to be automatically signed-in to Glia Hub with their Microsoft Entra accounts. +* Manage your accounts in one central location. ++## Prerequisites ++To integrate Microsoft Entra ID with Glia Hub, you need: ++* A Microsoft Entra subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* Glia Hub single sign-on (SSO) enabled subscription. ++## Scenario description ++In this tutorial, you configure and test Microsoft Entra SSO in a test environment. ++* Glia Hub supports **SP and IDP** initiated SSO. ++## Adding Glia Hub from the gallery ++To configure the integration of Glia Hub into Microsoft Entra ID, you need to add Glia Hub from the gallery to your list of managed SaaS apps. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **New application**. +1. In the **Add from the gallery** section, type **Glia Hub** in the search box. +1. Select **Glia Hub** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. 
++Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides) ++## Configure and test Microsoft Entra SSO for Glia Hub ++Configure and test Microsoft Entra SSO with Glia Hub using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between a Microsoft Entra user and the related user in Glia Hub. ++To configure and test Microsoft Entra SSO with Glia Hub, perform the following steps: ++1. **[Configure Microsoft Entra SSO](#configure-microsoft-entra-sso)** - to enable your users to use this feature. + 1. **[Create a Microsoft Entra ID test user](#create-a-microsoft-entra-id-test-user)** - to test Microsoft Entra single sign-on with B.Simon. + 1. **[Assign the Microsoft Entra ID test user](#assign-the-microsoft-entra-id-test-user)** - to enable B.Simon to use Microsoft Entra single sign-on. +1. **[Configure Glia Hub SSO](#configure-glia-hub-sso)** - to configure the single sign-on settings on application side. + 1. **[Create Glia Hub test user](#create-glia-hub-test-user)** - to have a counterpart of B.Simon in Glia Hub that is linked to the Microsoft Entra ID representation of user. +1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ++## Configure Microsoft Entra SSO ++Follow these steps to enable Microsoft Entra SSO in the Microsoft Entra admin center. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **Glia Hub** > **Single sign-on**. +1. 
On the **Select a single sign-on method** page, select **SAML**. +1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings. ++ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration") ++1. On the **Basic SAML Configuration** section, perform the following steps: ++ 1. In the **Identifier** textbox, type a URL using the following pattern: + `https://<CustomerName>.app.glia.com` ++ 1. In the **Reply URL** textbox, type a URL using the following pattern: + `https://<CustomerName>.app.glia.com/saml/acs` ++ 1. In the **Relay State** textbox, type a URL using the following pattern: + `https://<CustomerName>.app.glia.com` ++ 1. In the **Logout Url** textbox, type a URL using the following pattern: + `https://<CustomerName>.app.glia.com/saml/logout` ++1. Perform the following step, if you wish to configure the application in **SP** initiated mode: ++ 1. In the **Sign on URL** textbox, type a URL using the following pattern: + `https://<CustomerName>.app.glia.com` ++ > [!NOTE] + > These values are not real. Update these values with the actual Identifier, Reply URL, Sign on URL, Relay State and Logout Url. Contact [Glia Hub support team](mailto:support@glia.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Microsoft Entra admin center. ++1. Glia Hub application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes. ++ ![Screenshot shows the image of attributes configuration.](common/default-attributes.png "Image") ++1. In addition to above, Glia Hub application expects few more attributes to be passed back in SAML response, which are shown below. These attributes are also pre populated but you can review them as per your requirements. 
+ + | Name | Source Attribute| + | | | + | idp_name_attribute | user.userprincipalname | ++1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer. ++ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate") ++### Create a Microsoft Entra ID test user ++In this section, you create a test user in the Microsoft Entra admin center called B.Simon. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [User Administrator](../roles/permissions-reference.md#user-administrator). +1. Browse to **Identity** > **Users** > **All users**. +1. Select **New user** > **Create new user**, at the top of the screen. +1. In the **User** properties, follow these steps: + 1. In the **Display name** field, enter `B.Simon`. + 1. In the **User principal name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`. + 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box. + 1. Select **Review + create**. +1. Select **Create**. ++### Assign the Microsoft Entra ID test user ++In this section, you enable B.Simon to use Microsoft Entra single sign-on by granting access to Glia Hub. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **Glia Hub**. +1. In the app's overview page, select **Users and groups**. +1. Select **Add user/group**, then select **Users and groups** in the **Add Assignment** dialog. + 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. + 1. 
If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected. + 1. In the **Add Assignment** dialog, click the **Assign** button. ++## Configure Glia Hub SSO ++To configure single sign-on on **Glia Hub** side, you need to send the **App Federation Metadata Url** to [Glia Hub support team](mailto:support@glia.com). They set this setting to have the SAML SSO connection set properly on both sides. ++### Create Glia Hub test user ++In this section, you create a user called B.Simon in Glia Hub. Work with [Glia Hub support team](mailto:support@glia.com) to add the users in the Glia Hub platform. Users must be created and activated before you use single sign-on. ++## Test SSO ++In this section, you test your Microsoft Entra single sign-on configuration with following options. + +#### SP initiated: + +* Click on **Test this application** in Microsoft Entra admin center. This will redirect to Glia Hub Sign on URL where you can initiate the login flow. + +* Go to Glia Hub Sign-on URL directly and initiate the login flow from there. + +#### IDP initiated: + +* Click on **Test this application** in Microsoft Entra admin center and you should be automatically signed in to the Glia Hub for which you set up the SSO. + +You can also use Microsoft My Apps to test the application in any mode. When you click the Glia Hub tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Glia Hub for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). 
++## Next Steps ++Once you configure Glia Hub you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app). |
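The Glia Hub tutorial above adds one claim (`idp_name_attribute`, sourced from `user.userprincipalname`) on top of the default SAML attribute set. Representing the mapping as data makes the relationship explicit — a sketch in which the second claim and the sample user record are illustrative, not an Entra API:

```python
# Claim-name -> user-field mapping; idp_name_attribute is the extra claim
# the Glia Hub tutorial requires, the other entry stands in for a default claim.
CLAIM_MAP = {
    "idp_name_attribute": "userprincipalname",
    "emailaddress": "mail",
}

def build_claims(user: dict) -> dict:
    """Resolve each claim name to the mapped field of a user record,
    mimicking how the SAML token attributes would be populated."""
    return {claim: user[field] for claim, field in CLAIM_MAP.items()}

claims = build_claims({
    "userprincipalname": "B.Simon@contoso.com",
    "mail": "b.simon@contoso.com",
})
print(claims["idp_name_attribute"])  # B.Simon@contoso.com
```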
active-directory | Granite Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/granite-tutorial.md | + + Title: Microsoft Entra SSO integration with Granite +description: Learn how to configure single sign-on between Microsoft Entra ID and Granite. ++++++++ Last updated : 09/25/2023+++++# Microsoft Entra SSO integration with Granite ++In this tutorial, you learn how to integrate Granite with Microsoft Entra ID. When you integrate Granite with Microsoft Entra ID, you can: ++* Control in Microsoft Entra ID who has access to Granite. +* Enable your users to be automatically signed-in to Granite with their Microsoft Entra accounts. +* Manage your accounts in one central location. ++## Prerequisites ++To integrate Microsoft Entra ID with Granite, you need: ++* A Microsoft Entra subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* Granite single sign-on (SSO) enabled subscription. ++## Scenario description ++In this tutorial, you configure and test Microsoft Entra SSO in a test environment. ++* Granite supports **SP** initiated SSO. +* Granite supports **Just In Time** user provisioning. ++## Adding Granite from the gallery ++To configure the integration of Granite into Microsoft Entra ID, you need to add Granite from the gallery to your list of managed SaaS apps. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **New application**. +1. In the **Add from the gallery** section, type **Granite** in the search box. +1. Select **Granite** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. 
++Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides) ++## Configure and test Microsoft Entra SSO for Granite ++Configure and test Microsoft Entra SSO with Granite using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between a Microsoft Entra user and the related user in Granite. ++To configure and test Microsoft Entra SSO with Granite, perform the following steps: ++1. **[Configure Microsoft Entra SSO](#configure-microsoft-entra-sso)** - to enable your users to use this feature. + 1. **[Create a Microsoft Entra ID test user](#create-a-microsoft-entra-id-test-user)** - to test Microsoft Entra single sign-on with B.Simon. + 1. **[Assign the Microsoft Entra ID test user](#assign-the-microsoft-entra-id-test-user)** - to enable B.Simon to use Microsoft Entra single sign-on. +1. **[Configure Granite SSO](#configure-granite-sso)** - to configure the single sign-on settings on the application side. + 1. **[Create Granite test user](#create-granite-test-user)** - to have a counterpart of B.Simon in Granite that is linked to the Microsoft Entra ID representation of the user. +1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ++## Configure Microsoft Entra SSO ++Follow these steps to enable Microsoft Entra SSO in the Microsoft Entra admin center. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **Granite** > **Single sign-on**. +1. 
On the **Select a single sign-on method** page, select **SAML**. +1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings. ++ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration") ++1. On the **Basic SAML Configuration** section, perform the following steps: ++ a. In the **Identifier** textbox, type a value using the following pattern: + `<Customer_Name>.granitegrc.com` ++ b. In the **Reply URL** textbox, type a URL using the following pattern: + `https://<Customer_Name>.granitegrc.com/simplesaml/module.php/saml/sp/saml2-acs.php/default` ++ c. In the **Sign on URL** textbox, type a URL using the following pattern: + `https://<Customer_Name>.granitegrc.com` ++ > [!NOTE] + > These values are not real. Update these values with the actual Identifier, Reply URL, and Sign on URL. Contact the [Granite support team](mailto:support@granitegrc.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Microsoft Entra admin center. ++1. The Granite application expects SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes. ++ ![Screenshot shows the image of attributes configuration.](common/default-attributes.png "Image") ++1. In addition to the above, the Granite application expects a few more attributes to be passed back in the SAML response, as shown below. These attributes are also prepopulated, but you can review them per your requirements. + + | Name | Source Attribute| + | | | + | mail | user.mail | + | username | user.userprincipalname | + | groups | user.groups | + | company | user.companyname | + | department | user.department | + | objectid | user.objectid | ++1. 
On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer. ++ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate") ++### Create a Microsoft Entra ID test user ++In this section, you create a test user in the Microsoft Entra admin center called B.Simon. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [User Administrator](../roles/permissions-reference.md#user-administrator). +1. Browse to **Identity** > **Users** > **All users**. +1. Select **New user** > **Create new user**, at the top of the screen. +1. In the **User** properties, follow these steps: + 1. In the **Display name** field, enter `B.Simon`. + 1. In the **User principal name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`. + 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box. + 1. Select **Review + create**. +1. Select **Create**. ++### Assign the Microsoft Entra ID test user ++In this section, you enable B.Simon to use Microsoft Entra single sign-on by granting access to Granite. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **Granite**. +1. In the app's overview page, select **Users and groups**. +1. Select **Add user/group**, then select **Users and groups** in the **Add Assignment** dialog. + 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. + 1. If you're expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. 
If no role has been set up for this app, you see "Default Access" role selected. + 1. In the **Add Assignment** dialog, click the **Assign** button. ++## Configure Granite SSO ++To configure single sign-on on the **Granite** side, you need to send the **App Federation Metadata Url** to the [Granite support team](mailto:support@granitegrc.com). They configure this setting so that the SAML SSO connection is set properly on both sides. ++### Create Granite test user ++In this section, a user called B.Simon is created in Granite. Granite supports just-in-time provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in Granite, a new one is created when you attempt to access Granite. ++## Test SSO ++In this section, you test your Microsoft Entra single sign-on configuration with the following options. + +* Click on **Test this application** in the Microsoft Entra admin center. This redirects to the Granite Sign-on URL, where you can initiate the login flow. + +* Go to the Granite Sign-on URL directly and initiate the login flow from there. + +* You can use Microsoft My Apps. When you click the Granite tile in the My Apps, this redirects to the Granite Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ++## Next steps ++Once you configure Granite, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app). |
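The custom attribute mappings in the Granite steps above (mail, username, groups, and so on) arrive inside the `AttributeStatement` of the SAML response. As a minimal sketch of what the service provider receives — the XML fragment below is a hypothetical, hand-built sample, not real Granite or Entra output — you can pull those values out with Python's standard library:

```python
import xml.etree.ElementTree as ET

SAML2 = "urn:oasis:names:tc:SAML:2.0:assertion"

# Hypothetical AttributeStatement fragment mirroring the mapping table
# above (mail, username, groups); not a real SAML response.
fragment = f"""
<saml:AttributeStatement xmlns:saml="{SAML2}">
  <saml:Attribute Name="mail">
    <saml:AttributeValue>B.Simon@contoso.com</saml:AttributeValue>
  </saml:Attribute>
  <saml:Attribute Name="username">
    <saml:AttributeValue>B.Simon@contoso.com</saml:AttributeValue>
  </saml:Attribute>
  <saml:Attribute Name="groups">
    <saml:AttributeValue>Readers</saml:AttributeValue>
    <saml:AttributeValue>Writers</saml:AttributeValue>
  </saml:Attribute>
</saml:AttributeStatement>
"""

def read_attributes(xml_text: str) -> dict:
    """Map each SAML attribute name to its list of values."""
    root = ET.fromstring(xml_text)
    out = {}
    for attr in root.findall(f"{{{SAML2}}}Attribute"):
        values = attr.findall(f"{{{SAML2}}}AttributeValue")
        out[attr.get("Name")] = [v.text for v in values]
    return out

attrs = read_attributes(fragment)
print(attrs["mail"])    # ['B.Simon@contoso.com']
print(attrs["groups"])  # ['Readers', 'Writers']
```

A multi-valued attribute such as `groups` maps to multiple `AttributeValue` elements, which is why the sketch collects values into lists rather than single strings.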
active-directory | Guru Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/guru-tutorial.md | + + Title: Microsoft Entra SSO integration with Guru +description: Learn how to configure single sign-on between Microsoft Entra ID and Guru. ++++++++ Last updated : 09/25/2023+++++# Microsoft Entra SSO integration with Guru ++In this tutorial, you learn how to integrate Guru with Microsoft Entra ID. When you integrate Guru with Microsoft Entra ID, you can: ++* Control in Microsoft Entra ID who has access to Guru. +* Enable your users to be automatically signed-in to Guru with their Microsoft Entra accounts. +* Manage your accounts in one central location. ++## Prerequisites ++To integrate Microsoft Entra ID with Guru, you need: ++* A Microsoft Entra subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* Guru single sign-on (SSO) enabled subscription. ++## Scenario description ++In this tutorial, you configure and test Microsoft Entra SSO in a test environment. ++* Guru supports **IDP** initiated SSO. +* Guru supports **Just In Time** user provisioning. ++## Adding Guru from the gallery ++To configure the integration of Guru into Microsoft Entra ID, you need to add Guru from the gallery to your list of managed SaaS apps. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **New application**. +1. In the **Add from the gallery** section, type **Guru** in the search box. +1. Select **Guru** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. ++Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). 
In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides) ++## Configure and test Microsoft Entra SSO for Guru ++Configure and test Microsoft Entra SSO with Guru using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between a Microsoft Entra user and the related user in Guru. ++To configure and test Microsoft Entra SSO with Guru, perform the following steps: ++1. **[Configure Microsoft Entra SSO](#configure-microsoft-entra-sso)** - to enable your users to use this feature. + 1. **[Create a Microsoft Entra ID test user](#create-a-microsoft-entra-id-test-user)** - to test Microsoft Entra single sign-on with B.Simon. + 1. **[Assign the Microsoft Entra ID test user](#assign-the-microsoft-entra-id-test-user)** - to enable B.Simon to use Microsoft Entra single sign-on. +1. **[Configure Guru SSO](#configure-guru-sso)** - to configure the single sign-on settings on the application side. + 1. **[Create Guru test user](#create-guru-test-user)** - to have a counterpart of B.Simon in Guru that is linked to the Microsoft Entra ID representation of the user. +1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ++## Configure Microsoft Entra SSO ++Follow these steps to enable Microsoft Entra SSO in the Microsoft Entra admin center. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **Guru** > **Single sign-on**. +1. On the **Select a single sign-on method** page, select **SAML**. +1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings. 
++ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration") ++1. On the **Basic SAML Configuration** section, perform the following steps: ++ a. In the **Identifier** textbox, type a value using the following pattern: + `getguru.com/<TeamID>` ++ b. In the **Reply URL** textbox, type a URL using the following pattern: + `https://api.getguru.com/samlsso/<TeamID>` ++ > [!NOTE] + > These values are not real. Update these values with the actual Identifier and Reply URL. You can get the `TeamID` from the **[Configure Guru SSO](#configure-guru-sso)** section. If you have any queries, contact the [Guru support team](mailto:support@getguru.com). You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Microsoft Entra admin center. ++1. The Guru application expects SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes. ++ ![Screenshot shows the image of attributes configuration.](common/default-attributes.png "Image") ++1. In addition to the above, the Guru application expects a few more attributes to be passed back in the SAML response, as shown below. These attributes are also prepopulated, but you can review them per your requirements. + + | Name | Source Attribute| + | | | + | firstName | user.givenname | + | lastName | user.surname | ++1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer. ++ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate") ++1. On the **Set up Guru** section, copy the appropriate URL(s) based on your requirement. 
++ ![Screenshot shows to copy configuration URLs.](common/copy-configuration-urls.png "Metadata") ++### Create a Microsoft Entra ID test user ++In this section, you create a test user in the Microsoft Entra admin center called B.Simon. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [User Administrator](../roles/permissions-reference.md#user-administrator). +1. Browse to **Identity** > **Users** > **All users**. +1. Select **New user** > **Create new user**, at the top of the screen. +1. In the **User** properties, follow these steps: + 1. In the **Display name** field, enter `B.Simon`. + 1. In the **User principal name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`. + 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box. + 1. Select **Review + create**. +1. Select **Create**. ++### Assign the Microsoft Entra ID test user ++In this section, you enable B.Simon to use Microsoft Entra single sign-on by granting access to Guru. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **Guru**. +1. In the app's overview page, select **Users and groups**. +1. Select **Add user/group**, then select **Users and groups** in the **Add Assignment** dialog. + 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. + 1. If you're expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected. + 1. In the **Add Assignment** dialog, click the **Assign** button. ++## Configure Guru SSO ++1. 
Log in to your Guru company site as an administrator. ++1. Go to **Settings** > **Apps and Integrations** and click **SSO/SCIM**. ++1. In the **SSO/SCIM** section, perform the following steps: ++ ![Screenshot shows the administration portal.](media/guru-tutorial/manage.png "Admin") ++ 1. Copy the **Team ID** and save it to your computer. ++ 1. Copy the **Single Sign On Url** and paste this value into the **Reply URL** text box in the **Basic SAML Configuration** section in the Microsoft Entra admin center. ++ 1. Copy the **Audience URI** and paste this value into the **Identifier** text box in the **Basic SAML Configuration** section in the Microsoft Entra admin center. ++ 1. In the **Identity Provider Single Sign-On Url** textbox, paste the **Login URL** value, which you copied from the Microsoft Entra admin center. ++ 1. In the **Identity Provider Issuer** textbox, paste the **Microsoft Entra ID Identifier** value, which you copied from the Microsoft Entra admin center. ++ 1. Open the downloaded **Certificate (Base64)** file from the Microsoft Entra admin center in Notepad and paste the content into the **X.509 Certificate** textbox. ++ 1. Click **Enable SSO**. ++### Create Guru test user ++In this section, a user called B.Simon is created in Guru. Guru supports just-in-time provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in Guru, a new one is created when you attempt to access Guru. ++## Test SSO ++In this section, you test your Microsoft Entra single sign-on configuration with the following options. + +* Click on Test this application in the Microsoft Entra admin center, and you should be automatically signed in to the Guru for which you set up the SSO. + +* You can use Microsoft My Apps. When you click the Guru tile in the My Apps, you should be automatically signed in to the Guru for which you set up the SSO. 
For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ++## Next steps ++Once you configure Guru, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app). |
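The Guru steps above paste the content of the downloaded **Certificate (Base64)** file into an **X.509 Certificate** textbox. Some service-provider forms accept the full PEM file, while others want only the Base64 payload with the armor lines removed; when that's the case, a small sketch like the following can normalize it. The certificate bytes here are a dummy stand-in, not a real certificate — a real download from the admin center has the same `-----BEGIN CERTIFICATE-----` shape:

```python
import base64
import textwrap

def pem_body(pem: str) -> str:
    """Return the bare Base64 payload of a PEM block, armor lines removed."""
    lines = [ln.strip() for ln in pem.strip().splitlines()]
    body = [ln for ln in lines if ln and not ln.startswith("-----")]
    return "".join(body)

# Dummy bytes standing in for real DER-encoded certificate content.
payload = base64.b64encode(b"not-a-real-certificate").decode()
pem = (
    "-----BEGIN CERTIFICATE-----\n"
    + "\n".join(textwrap.wrap(payload, 64))
    + "\n-----END CERTIFICATE-----\n"
)

body = pem_body(pem)
# The recovered payload round-trips back to the original bytes.
assert base64.b64decode(body) == b"not-a-real-certificate"
print(body == payload)  # True
```

Which form a given app expects is app-specific; when in doubt, try the full PEM first and fall back to the bare body.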
active-directory | Insightly Saml Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/insightly-saml-tutorial.md | + + Title: Microsoft Entra SSO integration with Insightly SAML +description: Learn how to configure single sign-on between Microsoft Entra ID and Insightly SAML. ++++++++ Last updated : 09/25/2023+++++# Microsoft Entra SSO integration with Insightly SAML ++In this tutorial, you learn how to integrate Insightly SAML with Microsoft Entra ID. When you integrate Insightly SAML with Microsoft Entra ID, you can: ++* Control in Microsoft Entra ID who has access to Insightly SAML. +* Enable your users to be automatically signed-in to Insightly SAML with their Microsoft Entra accounts. +* Manage your accounts in one central location. ++## Prerequisites ++To integrate Microsoft Entra ID with Insightly SAML, you need: ++* A Microsoft Entra subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* Insightly SAML single sign-on (SSO) enabled subscription. ++## Scenario description ++In this tutorial, you configure and test Microsoft Entra SSO in a test environment. ++* Insightly SAML supports **IDP** initiated SSO. ++## Adding Insightly SAML from the gallery ++To configure the integration of Insightly SAML into Microsoft Entra ID, you need to add Insightly SAML from the gallery to your list of managed SaaS apps. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **New application**. +1. In the **Add from the gallery** section, type **Insightly SAML** in the search box. +1. Select **Insightly SAML** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. 
++Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides) ++## Configure and test Microsoft Entra SSO for Insightly SAML ++Configure and test Microsoft Entra SSO with Insightly SAML using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between a Microsoft Entra user and the related user in Insightly SAML. ++To configure and test Microsoft Entra SSO with Insightly SAML, perform the following steps: ++1. **[Configure Microsoft Entra SSO](#configure-microsoft-entra-sso)** - to enable your users to use this feature. + 1. **[Create a Microsoft Entra ID test user](#create-a-microsoft-entra-id-test-user)** - to test Microsoft Entra single sign-on with B.Simon. + 1. **[Assign the Microsoft Entra ID test user](#assign-the-microsoft-entra-id-test-user)** - to enable B.Simon to use Microsoft Entra single sign-on. +1. **[Configure Insightly SAML SSO](#configure-insightly-saml-sso)** - to configure the single sign-on settings on the application side. + 1. **[Create Insightly SAML test user](#create-insightly-saml-test-user)** - to have a counterpart of B.Simon in Insightly SAML that is linked to the Microsoft Entra ID representation of the user. +1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ++## Configure Microsoft Entra SSO ++Follow these steps to enable Microsoft Entra SSO in the Microsoft Entra admin center. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. 
Browse to **Identity** > **Applications** > **Enterprise applications** > **Insightly SAML** > **Single sign-on**. +1. On the **Select a single sign-on method** page, select **SAML**. +1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings. ++ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration") ++1. On the **Basic SAML Configuration** section, perform the following steps: ++ a. In the **Identifier** textbox, type a URL using one of the following patterns: ++ | **Identifier** | + || + | `https://crm.na1.insightly.com/user/saml?instanceId=<ID>` | + | `https://crm.au1.insightly.com/user/saml?instanceId=<ID>` | ++ b. In the **Reply URL** textbox, type a URL using one of the following patterns: ++ | **Reply URL** | + || + | `https://crm.na1.insightly.com/user/saml?instanceId=<ID>` | + | `https://crm.au1.insightly.com/user/saml?instanceId=<ID>` | ++ > [!NOTE] + > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [Insightly SAML support team](mailto:support@insight.ly) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Microsoft Entra admin center. ++1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer. ++ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate") ++1. On the **Set up Insightly SAML** section, copy the appropriate URL(s) based on your requirement. ++ ![Screenshot shows to copy configuration URLs.](common/copy-configuration-urls.png "Metadata") ++### Create a Microsoft Entra ID test user ++In this section, you create a test user in the Microsoft Entra admin center called B.Simon. ++1. 
Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [User Administrator](../roles/permissions-reference.md#user-administrator). +1. Browse to **Identity** > **Users** > **All users**. +1. Select **New user** > **Create new user**, at the top of the screen. +1. In the **User** properties, follow these steps: + 1. In the **Display name** field, enter `B.Simon`. + 1. In the **User principal name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`. + 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box. + 1. Select **Review + create**. +1. Select **Create**. ++### Assign the Microsoft Entra ID test user ++In this section, you enable B.Simon to use Microsoft Entra single sign-on by granting access to Insightly SAML. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **Insightly SAML**. +1. In the app's overview page, select **Users and groups**. +1. Select **Add user/group**, then select **Users and groups** in the **Add Assignment** dialog. + 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. + 1. If you're expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected. + 1. In the **Add Assignment** dialog, click the **Assign** button. ++## Configure Insightly SAML SSO ++To configure single sign-on on **Insightly SAML** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Microsoft Entra admin center to [Insightly SAML support team](mailto:support@insight.ly). 
They configure this setting so that the SAML SSO connection is set properly on both sides. ++### Create Insightly SAML test user ++In this section, you create a user called B.Simon in Insightly SAML. Work with the [Insightly SAML support team](mailto:support@insight.ly) to add the users in the Insightly SAML platform. Users must be created and activated before you use single sign-on. ++## Test SSO ++In this section, you test your Microsoft Entra single sign-on configuration with the following options. + +* Click on Test this application in the Microsoft Entra admin center, and you should be automatically signed in to the Insightly SAML for which you set up the SSO. + +* You can use Microsoft My Apps. When you click the Insightly SAML tile in the My Apps, you should be automatically signed in to the Insightly SAML for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ++## Next steps ++Once you configure Insightly SAML, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app). |
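The Insightly Identifier and Reply URL patterns shown earlier differ only by regional pod (`na1` or `au1`) and instance ID. A small helper makes that structure explicit — the region set is just the two pods named in this article (others may exist), and the instance ID below is an illustrative placeholder, not a real value; use what Insightly support gives you:

```python
def insightly_saml_url(region: str, instance_id: str) -> str:
    """Build the Identifier/Reply URL pattern from Basic SAML Configuration."""
    # Only the pods named in this article; Insightly may operate others.
    if region not in {"na1", "au1"}:
        raise ValueError(f"unknown region pod: {region}")
    return f"https://crm.{region}.insightly.com/user/saml?instanceId={instance_id}"

# "12345" is a hypothetical instance ID used only for illustration.
print(insightly_saml_url("na1", "12345"))
# https://crm.na1.insightly.com/user/saml?instanceId=12345
```

Note that for Insightly the Identifier and Reply URL share the same pattern, so one helper covers both fields.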
active-directory | Insightsfirst Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/insightsfirst-tutorial.md | + + Title: Microsoft Entra SSO integration with Insightsfirst +description: Learn how to configure single sign-on between Microsoft Entra ID and Insightsfirst. ++++++++ Last updated : 09/25/2023+++++# Microsoft Entra SSO integration with Insightsfirst ++In this tutorial, you learn how to integrate Insightsfirst with Microsoft Entra ID. When you integrate Insightsfirst with Microsoft Entra ID, you can: ++* Control in Microsoft Entra ID who has access to Insightsfirst. +* Enable your users to be automatically signed-in to Insightsfirst with their Microsoft Entra accounts. +* Manage your accounts in one central location. ++## Prerequisites ++To integrate Microsoft Entra ID with Insightsfirst, you need: ++* A Microsoft Entra subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* Insightsfirst single sign-on (SSO) enabled subscription. ++## Scenario description ++In this tutorial, you configure and test Microsoft Entra SSO in a test environment. ++* Insightsfirst supports **SP** initiated SSO. +* Insightsfirst supports **Just In Time** user provisioning. ++> [!NOTE] +> Identifier of this application is a fixed string value so only one instance can be configured in one tenant. ++## Adding Insightsfirst from the gallery ++To configure the integration of Insightsfirst into Microsoft Entra ID, you need to add Insightsfirst from the gallery to your list of managed SaaS apps. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **New application**. +1. In the **Add from the gallery** section, type **Insightsfirst** in the search box. +1. 
Select **Insightsfirst** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant. ++Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides) ++## Configure and test Microsoft Entra SSO for Insightsfirst ++Configure and test Microsoft Entra SSO with Insightsfirst using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between a Microsoft Entra user and the related user in Insightsfirst. ++To configure and test Microsoft Entra SSO with Insightsfirst, perform the following steps: ++1. **[Configure Microsoft Entra SSO](#configure-microsoft-entra-sso)** - to enable your users to use this feature. + 1. **[Create a Microsoft Entra ID test user](#create-a-microsoft-entra-id-test-user)** - to test Microsoft Entra single sign-on with B.Simon. + 1. **[Assign the Microsoft Entra ID test user](#assign-the-microsoft-entra-id-test-user)** - to enable B.Simon to use Microsoft Entra single sign-on. +1. **[Configure Insightsfirst SSO](#configure-insightsfirst-sso)** - to configure the single sign-on settings on the application side. + 1. **[Create Insightsfirst test user](#create-insightsfirst-test-user)** - to have a counterpart of B.Simon in Insightsfirst that is linked to the Microsoft Entra ID representation of the user. +1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ++## Configure Microsoft Entra SSO ++Follow these steps to enable Microsoft Entra SSO in the Microsoft Entra admin center. ++1. 
Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **Insightsfirst** > **Single sign-on**. +1. On the **Select a single sign-on method** page, select **SAML**. +1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings. ++ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration") ++1. On the **Basic SAML Configuration** section, perform the following steps: ++ a. In the **Identifier** textbox, type one of the following URLs: ++ | **Identifier** | + || + | `https://insightsfirst-implementation.evalueserve.com` | + | `https://insightsfirst.evalueserve.com/` | ++ b. In the **Reply URL** textbox, type one of the following URLs: ++ | **Reply URL** | + || + | `https://insightsfirst-implementation.evalueserve.com/InsightFirstSSO/api/Assertion/ConsumerService` | + | `https://insightsfirst.evalueserve.com/InsightFirstSSO/api/Assertion/ConsumerService` | ++ c. In the **Sign on URL** textbox, type one of the following URLs: ++ | **Sign on URL** | + || + | `https://insightsfirst.evalueserve.com/Microsoft` | + | `https://insightsfirst-implementation.evalueserve.com/Microsoft` | ++1. The Insightsfirst application expects SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes. ++ ![Screenshot shows the image of attributes configuration.](common/default-attributes.png "Image") ++1. In addition to the above, the Insightsfirst application expects a few more attributes to be passed back in the SAML response, as shown below. These attributes are also prepopulated, but you can review them per your requirements. 
+ + | Name | Source Attribute| + | | | + | Email | user.mail | ++1. In the **SAML Signing Certificate** section, click **Edit** button to open **SAML Signing Certificate** dialog. ++ ![Screenshot shows to Edit SAML Signing Certificate.](common/edit-certificate.png "Certificate") ++1. In the **SAML Signing Certificate** section, copy the **Thumbprint Value** and save it on your computer. ++ ![Screenshot shows to Copy Thumbprint value.](common/copy-thumbprint.png "Thumbprint") ++1. On the **Set up Insightsfirst** section, copy the appropriate URL(s) based on your requirement. ++ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata") ++### Create a Microsoft Entra ID test user ++In this section, you create a test user in the Microsoft Entra admin center called B.Simon. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [User Administrator](../roles/permissions-reference.md#user-administrator). +1. Browse to **Identity** > **Users** > **All users**. +1. Select **New user** > **Create new user**, at the top of the screen. +1. In the **User** properties, follow these steps: + 1. In the **Display name** field, enter `B.Simon`. + 1. In the **User principal name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`. + 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box. + 1. Select **Review + create**. +1. Select **Create**. ++### Assign the Microsoft Entra ID test user ++In this section, you enable B.Simon to use Microsoft Entra single sign-on by granting access to Insightsfirst. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **Insightsfirst**. +1. 
In the app's overview page, select **Users and groups**. +1. Select **Add user/group**, then select **Users and groups** in the **Add Assignment** dialog. + 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. + 1. If you're expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected. + 1. In the **Add Assignment** dialog, click the **Assign** button. ++## Configure Insightsfirst SSO ++To configure single sign-on on the **Insightsfirst** side, you need to send the **Thumbprint Value** and the appropriate copied URLs from the Microsoft Entra admin center to the [Insightsfirst support team](mailto:insightsfirst.support@evalueserve.com). They use these values to set up the SAML SSO connection properly on both sides. ++### Create Insightsfirst test user ++In this section, a user called Britta Simon is created in Insightsfirst. Insightsfirst supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in Insightsfirst, a new one is created after authentication. ++## Test SSO ++In this section, you test your Microsoft Entra single sign-on configuration with the following options. + +* Click **Test this application** in the Microsoft Entra admin center. This redirects to the Insightsfirst Sign-on URL, where you can initiate the login flow. + +* Go to the Insightsfirst Sign-on URL directly and initiate the login flow from there. + +* You can use Microsoft My Apps. When you click the Insightsfirst tile in My Apps, you're redirected to the Insightsfirst Sign-on URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md). 
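As an illustrative aside (not part of the official setup steps), the **Thumbprint Value** copied earlier is simply the uppercase SHA-1 hash of the signing certificate's DER bytes, and the same certificate also appears Base64-encoded inside federation metadata documents. The following Python sketch shows that relationship using a stand-in certificate payload (the sample bytes and metadata snippet are placeholders, not a real certificate):

```python
import base64
import hashlib
import xml.etree.ElementTree as ET

DS = "{http://www.w3.org/2000/09/xmldsig#}"

def cert_thumbprint(metadata_xml: str) -> str:
    """Return the SHA-1 thumbprint (uppercase hex) of the first
    X509Certificate element found in a federation metadata document."""
    root = ET.fromstring(metadata_xml)
    cert_b64 = root.find(f".//{DS}X509Certificate").text.strip()
    der = base64.b64decode(cert_b64)
    return hashlib.sha1(der).hexdigest().upper()

# Stand-in metadata; a real document comes from the SAML Signing
# Certificate section in the admin center.
sample = (
    '<EntityDescriptor xmlns:ds="http://www.w3.org/2000/09/xmldsig#">'
    "<ds:X509Certificate>"
    + base64.b64encode(b"not-a-real-certificate").decode("ascii")
    + "</ds:X509Certificate></EntityDescriptor>"
)
print(cert_thumbprint(sample))  # 40 uppercase hex characters
```

This can be handy as a sanity check that the thumbprint you send to a support team matches the certificate in the metadata you downloaded.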
++## Next steps ++Once you configure Insightsfirst, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app). |
active-directory | Mic Saas Portal Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mic-saas-portal-tutorial.md | + + Title: Microsoft Entra SSO integration with MIC SAAS Portal +description: Learn how to configure single sign-on between Microsoft Entra ID and MIC SAAS Portal. ++++++++ Last updated : 09/25/2023+++++# Microsoft Entra SSO integration with MIC SAAS Portal ++In this tutorial, you learn how to integrate MIC SAAS Portal with Microsoft Entra ID. When you integrate MIC SAAS Portal with Microsoft Entra ID, you can: ++* Control in Microsoft Entra ID who has access to MIC SAAS Portal. +* Enable your users to be automatically signed-in to MIC SAAS Portal with their Microsoft Entra accounts. +* Manage your accounts in one central location. ++## Prerequisites ++To integrate Microsoft Entra ID with MIC SAAS Portal, you need: ++* A Microsoft Entra subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* MIC SAAS Portal single sign-on (SSO) enabled subscription. ++## Scenario description ++In this tutorial, you configure and test Microsoft Entra SSO in a test environment. ++* MIC SAAS Portal supports **SP** initiated SSO. +* MIC SAAS Portal supports **Just In Time** user provisioning. ++## Adding MIC SAAS Portal from the gallery ++To configure the integration of MIC SAAS Portal into Microsoft Entra ID, you need to add MIC SAAS Portal from the gallery to your list of managed SaaS apps. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **New application**. +1. In the **Add from the gallery** section, type **MIC SAAS Portal** in the search box. +1. Select **MIC SAAS Portal** from results panel and then add the app. 
Wait a few seconds while the app is added to your tenant. ++Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides) ++## Configure and test Microsoft Entra SSO for MIC SAAS Portal ++Configure and test Microsoft Entra SSO with MIC SAAS Portal using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between a Microsoft Entra user and the related user in MIC SAAS Portal. ++To configure and test Microsoft Entra SSO with MIC SAAS Portal, perform the following steps: ++1. **[Configure Microsoft Entra SSO](#configure-microsoft-entra-sso)** - to enable your users to use this feature. + 1. **[Create a Microsoft Entra ID test user](#create-a-microsoft-entra-id-test-user)** - to test Microsoft Entra single sign-on with B.Simon. + 1. **[Assign the Microsoft Entra ID test user](#assign-the-microsoft-entra-id-test-user)** - to enable B.Simon to use Microsoft Entra single sign-on. +1. **[Configure MIC SAAS Portal SSO](#configure-mic-saas-portal-sso)** - to configure the single sign-on settings on application side. + 1. **[Create MIC SAAS Portal test user](#create-mic-saas-portal-test-user)** - to have a counterpart of B.Simon in MIC SAAS Portal that is linked to the Microsoft Entra ID representation of user. +1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ++## Configure Microsoft Entra SSO ++Follow these steps to enable Microsoft Entra SSO in the Microsoft Entra admin center. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. 
Browse to **Identity** > **Applications** > **Enterprise applications** > **MIC SAAS Portal** > **Single sign-on**. +1. On the **Select a single sign-on method** page, select **SAML**. +1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings. ++ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration") ++1. On the **Basic SAML Configuration** section, perform the following steps: ++ a. In the **Identifier** textbox, type a URL using the following pattern: + `https://sso.eu.micgtm.com/auth/realms/<INSTANCE>` ++ b. In the **Reply URL** textbox, type a URL using the following pattern: + `https://sso.eu.micgtm.com/auth/realms/<INSTANCE>/broker/<PROVIDER>/endpoint` ++ c. In the **Sign on URL** textbox, type a URL using the following pattern: + `https://gtmportal.eu.micgtm.com/?idp=<INSTANCE>` ++ > [!NOTE] + > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [MIC SAAS Portal support team](mailto:support@mic-cust.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Microsoft Entra admin center. ++1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer. ++ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate") ++1. On the **Set up MIC SAAS Portal** section, copy the appropriate URL(s) based on your requirement. ++ ![Screenshot shows to copy configuration URLs.](common/copy-configuration-urls.png "Metadata") ++### Create a Microsoft Entra ID test user ++In this section, you create a test user in the Microsoft Entra admin center called B.Simon. ++1. 
Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [User Administrator](../roles/permissions-reference.md#user-administrator). +1. Browse to **Identity** > **Users** > **All users**. +1. Select **New user** > **Create new user**, at the top of the screen. +1. In the **User** properties, follow these steps: + 1. In the **Display name** field, enter `B.Simon`. + 1. In the **User principal name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`. + 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box. + 1. Select **Review + create**. +1. Select **Create**. ++### Assign the Microsoft Entra ID test user ++In this section, you enable B.Simon to use Microsoft Entra single sign-on by granting access to MIC SAAS Portal. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **MIC SAAS Portal**. +1. In the app's overview page, select **Users and groups**. +1. Select **Add user/group**, then select **Users and groups** in the **Add Assignment** dialog. + 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. + 1. If you're expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected. + 1. In the **Add Assignment** dialog, click the **Assign** button. ++## Configure MIC SAAS Portal SSO ++To configure single sign-on on **MIC SAAS Portal** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Microsoft Entra admin center to [MIC SAAS Portal support team](mailto:support@mic-cust.com). 
They use this information to set up the SAML SSO connection properly on both sides. ++### Create MIC SAAS Portal test user ++In this section, a user called Britta Simon is created in MIC SAAS Portal. MIC SAAS Portal supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in MIC SAAS Portal, a new one is created after authentication. ++## Test SSO ++In this section, you test your Microsoft Entra single sign-on configuration with the following options. + +* Click **Test this application** in the Microsoft Entra admin center. This redirects to the MIC SAAS Portal Sign-on URL, where you can initiate the login flow. + +* Go to the MIC SAAS Portal Sign-on URL directly and initiate the login flow from there. + +* You can use Microsoft My Apps. When you click the MIC SAAS Portal tile in My Apps, you're redirected to the MIC SAAS Portal Sign-on URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md). ++## Next steps ++Once you configure MIC SAAS Portal, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app). |
active-directory | Parallels Desktop Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/parallels-desktop-tutorial.md | To configure single sign-on on **Parallels Desktop** side, follow the latest ver ### Create Parallels Desktop test user -Add existing user accounts to the Admin or User groups on the Azure AD side, following Parallels's Azure SSO setup guide that can be found on [this page](https://kb.parallels.com/en/129240). When a user account gets deactivated following their departure from the organization, that is immediately reflected in the user count of the Parallels's product license. +Add existing user accounts to the Admin or User groups on the Microsoft Entra ID side, following Parallels' Azure SSO setup guide, which can be found on [this page](https://kb.parallels.com/en/129240). When a user account gets deactivated following their departure from the organization, that is immediately reflected in the user count of the Parallels product license. ## Test SSO |
active-directory | Prosci Portal Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/prosci-portal-tutorial.md | + + Title: Microsoft Entra SSO integration with Prosci Portal +description: Learn how to configure single sign-on between Microsoft Entra ID and Prosci Portal. ++++++++ Last updated : 09/25/2023+++++# Microsoft Entra SSO integration with Prosci Portal ++In this tutorial, you'll learn how to integrate Prosci Portal with Microsoft Entra ID. When you integrate Prosci Portal with Microsoft Entra ID, you can: ++* Control in Microsoft Entra ID who has access to Prosci Portal. +* Enable your users to be automatically signed-in to Prosci Portal with their Microsoft Entra accounts. +* Manage your accounts in one central location. ++## Prerequisites ++To integrate Microsoft Entra ID with Prosci Portal, you need: ++* A Microsoft Entra subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* Prosci Portal single sign-on (SSO) enabled subscription. ++## Scenario description ++In this tutorial, you configure and test Microsoft Entra SSO in a test environment. ++* Prosci Portal supports **SP** initiated SSO. ++> [!NOTE] +> Identifier of this application is a fixed string value so only one instance can be configured in one tenant. ++## Adding Prosci Portal from the gallery ++To configure the integration of Prosci Portal into Microsoft Entra ID, you need to add Prosci Portal from the gallery to your list of managed SaaS apps. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **New application**. +1. In the **Add from the gallery** section, type **Prosci Portal** in the search box. +1. 
Select **Prosci Portal** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant. ++Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides) ++## Configure and test Microsoft Entra SSO for Prosci Portal ++Configure and test Microsoft Entra SSO with Prosci Portal using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between a Microsoft Entra user and the related user in Prosci Portal. ++To configure and test Microsoft Entra SSO with Prosci Portal, perform the following steps: ++1. **[Configure Microsoft Entra SSO](#configure-microsoft-entra-sso)** - to enable your users to use this feature. + 1. **[Create a Microsoft Entra ID test user](#create-a-microsoft-entra-id-test-user)** - to test Microsoft Entra single sign-on with B.Simon. + 1. **[Assign the Microsoft Entra ID test user](#assign-the-microsoft-entra-id-test-user)** - to enable B.Simon to use Microsoft Entra single sign-on. +1. **[Configure Prosci Portal SSO](#configure-prosci-portal-sso)** - to configure the single sign-on settings on the application side. + 1. **[Create Prosci Portal test user](#create-prosci-portal-test-user)** - to have a counterpart of B.Simon in Prosci Portal that is linked to the Microsoft Entra ID representation of the user. +1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ++## Configure Microsoft Entra SSO ++Follow these steps to enable Microsoft Entra SSO in the Microsoft Entra admin center. ++1. 
Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **Prosci Portal** > **Single sign-on**. +1. On the **Select a single sign-on method** page, select **SAML**. +1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings. ++ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration") ++1. On the **Basic SAML Configuration** section, perform the following steps: ++ a. In the **Identifier** textbox, type one of the following values: ++ | **Environment**| **URL** | + ||| + | Production |`urn:auth0:prosci-prod:microsoft`| + | Staging |`urn:auth0:prosci-staging:microsoft`| ++ b. In the **Reply URL** textbox, type one of the following URLs: ++ | **Environment**| **URL** | + ||| + | Production | `https://id.prosci.com/login/callback?connection=microsoft` | + | Staging | `https://id-staging.prosci.com/login/callback?connection=microsoft` | ++ c. In the **Sign on URL** textbox, type one of the following URLs: + + | **Environment**| **URL** | + ||| + | Production | `https://id.prosci.com` | + | Staging | `https://id-staging.prosci.com` | ++ d. In the **Relay State** textbox, type one of the following URLs: ++ | **Environment**| **URL** | + ||| + | Production | `https://portal.prosci.com` | + | Staging | `https://portal-staging.prosci.com` | ++ e. In the **Logout Url** textbox, type one of the following URLs: ++ | **Environment**| **URL** | + ||| + | Production | `https://id.prosci.com/logout` | + | Staging | `https://id-staging.prosci.com/logout` | ++1. 
On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer. ++ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate") ++1. On the **Set up Prosci Portal** section, copy the appropriate URL(s) based on your requirement. ++ ![Screenshot shows to Copy configuration URLs.](common/copy-configuration-urls.png "Metadata") ++### Create a Microsoft Entra ID test user ++In this section, you create a test user in the Microsoft Entra admin center called B.Simon. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [User Administrator](../roles/permissions-reference.md#user-administrator). +1. Browse to **Identity** > **Users** > **All users**. +1. Select **New user** > **Create new user**, at the top of the screen. +1. In the **User** properties, follow these steps: + 1. In the **Display name** field, enter `B.Simon`. + 1. In the **User principal name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`. + 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box. + 1. Select **Review + create**. +1. Select **Create**. ++### Assign the Microsoft Entra ID test user ++In this section, you'll enable B.Simon to use Microsoft Entra single sign-on by granting access to Prosci Portal. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **Prosci Portal**. +1. In the app's overview page, select **Users and groups**. +1. Select **Add user/group**, then select **Users and groups** in the **Add Assignment** dialog. + 1. 
In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. + 1. If you're expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected. + 1. In the **Add Assignment** dialog, click the **Assign** button. ++## Configure Prosci Portal SSO ++To configure single sign-on on the **Prosci Portal** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Microsoft Entra admin center to the [Prosci Portal support team](mailto:support@prosci.com). They use these values to set up the SAML SSO connection properly on both sides. ++### Create Prosci Portal test user ++In this section, you create a user called B.Simon in Prosci Portal. Work with the [Prosci Portal support team](mailto:support@prosci.com) to add the users in the Prosci Portal platform. Users must be created and activated before you use single sign-on. ++## Test SSO ++In this section, you test your Microsoft Entra single sign-on configuration with the following options. + +* Click **Test this application** in the Microsoft Entra admin center. This redirects to the Prosci Portal Sign-on URL, where you can initiate the login flow. + +* Go to the Prosci Portal Sign-on URL directly and initiate the login flow from there. + +* You can use Microsoft My Apps. When you click the Prosci Portal tile in My Apps, you're redirected to the Prosci Portal Sign-on URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md). ++## Next steps ++Once you configure Prosci Portal, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. 
[Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app). |
active-directory | Rolemapper Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/rolemapper-tutorial.md | + + Title: Microsoft Entra SSO integration with RoleMapper +description: Learn how to configure single sign-on between Microsoft Entra ID and RoleMapper. ++++++++ Last updated : 09/25/2023+++++# Microsoft Entra SSO integration with RoleMapper ++In this tutorial, you learn how to integrate RoleMapper with Microsoft Entra ID. When you integrate RoleMapper with Microsoft Entra ID, you can: ++* Control in Microsoft Entra ID who has access to RoleMapper. +* Enable your users to be automatically signed-in to RoleMapper with their Microsoft Entra accounts. +* Manage your accounts in one central location. ++## Prerequisites ++To integrate Microsoft Entra ID with RoleMapper, you need: ++* A Microsoft Entra subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* RoleMapper single sign-on (SSO) enabled subscription. ++## Scenario description ++In this tutorial, you configure and test Microsoft Entra SSO in a test environment. ++* RoleMapper supports **SP and IDP** initiated SSO. +* RoleMapper supports **Just In Time** user provisioning. ++## Adding RoleMapper from the gallery ++To configure the integration of RoleMapper into Microsoft Entra ID, you need to add RoleMapper from the gallery to your list of managed SaaS apps. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **New application**. +1. In the **Add from the gallery** section, type **RoleMapper** in the search box. +1. Select **RoleMapper** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. 
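The scenario description above notes that RoleMapper supports just-in-time (JIT) user provisioning: the counterpart account is created on the application side at first successful sign-in, with no action required in your tenant. Purely as a hedged illustration of that pattern (the store and claim names here are hypothetical, not RoleMapper's actual API), a generic JIT-provisioning step looks roughly like this:

```python
# Hypothetical in-memory user store; real JIT provisioning runs inside
# the service provider (here, RoleMapper), not in your tenant.
users: dict[str, dict] = {}

def jit_provision(assertion_claims: dict) -> dict:
    """Return the existing user for this SAML assertion, or create one
    on first successful sign-in (just-in-time provisioning)."""
    upn = assertion_claims["nameid"]
    if upn not in users:
        users[upn] = {
            "upn": upn,
            "email": assertion_claims.get("email", upn),
            "created_by": "jit",
        }
    return users[upn]

first = jit_provision({"nameid": "B.Simon@contoso.com",
                       "email": "B.Simon@contoso.com"})
second = jit_provision({"nameid": "B.Simon@contoso.com"})
print(first is second)  # prints True: the same record is reused later
```

The key property is idempotence: repeated sign-ins by the same subject never create duplicate accounts, which is why the tutorial's "Create RoleMapper test user" section has no action items.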
++Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides) ++## Configure and test Microsoft Entra SSO for RoleMapper ++Configure and test Microsoft Entra SSO with RoleMapper using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between a Microsoft Entra user and the related user in RoleMapper. ++To configure and test Microsoft Entra SSO with RoleMapper, perform the following steps: ++1. **[Configure Microsoft Entra SSO](#configure-microsoft-entra-sso)** - to enable your users to use this feature. + 1. **[Create a Microsoft Entra ID test user](#create-a-microsoft-entra-id-test-user)** - to test Microsoft Entra single sign-on with B.Simon. + 1. **[Assign the Microsoft Entra ID test user](#assign-the-microsoft-entra-id-test-user)** - to enable B.Simon to use Microsoft Entra single sign-on. +1. **[Configure RoleMapper SSO](#configure-rolemapper-sso)** - to configure the single sign-on settings on application side. + 1. **[Create RoleMapper test user](#create-rolemapper-test-user)** - to have a counterpart of B.Simon in RoleMapper that is linked to the Microsoft Entra ID representation of user. +1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ++## Configure Microsoft Entra SSO ++Follow these steps to enable Microsoft Entra SSO in the Microsoft Entra admin center. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **RoleMapper** > **Single sign-on**. +1. 
On the **Select a single sign-on method** page, select **SAML**. +1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings. ++ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration") ++1. On the **Basic SAML Configuration** section, perform the following steps: ++ a. In the **Identifier** textbox, type a value using the following pattern: + `api.role-mapper.com/sso/<CustomerName>` ++ b. In the **Reply URL** textbox, type a URL using the following pattern: + `https://api.role-mapper.com/sso/saml2/<CustomerName>` ++1. Perform the following step, if you wish to configure the application in **SP** initiated mode: ++ In the **Sign on URL** textbox, type a URL using the following pattern: + `https://api.role-mapper.com/sso/<CustomerName>` ++ > [!NOTE] + > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [RoleMapper support team](mailto:support@rolemapper.tech) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Microsoft Entra admin center. ++1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer. ++ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate") ++1. On the **Set up RoleMapper** section, copy the appropriate URL(s) based on your requirement. ++ ![Screenshot shows to copy configuration URLs.](common/copy-configuration-urls.png "Metadata") ++### Create a Microsoft Entra ID test user ++In this section, you create a test user in the Microsoft Entra admin center called B.Simon. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [User Administrator](../roles/permissions-reference.md#user-administrator). 
+1. Browse to **Identity** > **Users** > **All users**. +1. Select **New user** > **Create new user**, at the top of the screen. +1. In the **User** properties, follow these steps: + 1. In the **Display name** field, enter `B.Simon`. + 1. In the **User principal name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`. + 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box. + 1. Select **Review + create**. +1. Select **Create**. ++### Assign the Microsoft Entra ID test user ++In this section, you enable B.Simon to use Microsoft Entra single sign-on by granting access to RoleMapper. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **RoleMapper**. +1. In the app's overview page, select **Users and groups**. +1. Select **Add user/group**, then select **Users and groups** in the **Add Assignment** dialog. + 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. + 1. If you're expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected. + 1. In the **Add Assignment** dialog, click the **Assign** button. ++## Configure RoleMapper SSO ++To configure single sign-on on **RoleMapper** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Microsoft Entra admin center to [RoleMapper support team](mailto:support@rolemapper.tech). They set this setting to have the SAML SSO connection set properly on both sides. ++### Create RoleMapper test user ++In this section, a user called Britta Simon is created in RoleMapper. 
RoleMapper supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in RoleMapper, a new one is created after authentication. ++## Test SSO ++In this section, you test your Microsoft Entra single sign-on configuration with the following options. + +#### SP initiated: + +* Click on **Test this application** in Microsoft Entra admin center. This redirects to the RoleMapper Sign-on URL, where you can initiate the login flow. + +* Go to the RoleMapper Sign-on URL directly and initiate the login flow from there. + +#### IDP initiated: + +* Click on **Test this application** in Microsoft Entra admin center and you should be automatically signed in to the RoleMapper for which you set up the SSO. + +You can also use Microsoft My Apps to test the application in any mode. When you click the RoleMapper tile in My Apps, if it's configured in SP mode you're redirected to the application sign-on page to initiate the login flow, and if it's configured in IDP mode, you should be automatically signed in to the RoleMapper for which you set up the SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ++## Next steps ++Once you configure RoleMapper, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app). |
active-directory | Serenity Connect Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/serenity-connect-tutorial.md | + + Title: Microsoft Entra SSO integration with Serenity Connect. +description: Learn how to configure single sign-on between Microsoft Entra ID and Serenity Connect. ++++++++ Last updated : 09/25/2023+++++# Microsoft Entra SSO integration with Serenity Connect ++In this tutorial, you learn how to integrate Serenity Connect with Microsoft Entra ID. When you integrate Serenity Connect with Microsoft Entra ID, you can: ++* Control in Microsoft Entra ID who has access to Serenity Connect. +* Enable your users to be automatically signed-in to Serenity Connect with their Microsoft Entra accounts. +* Manage your accounts in one central location. ++## Prerequisites ++To integrate Microsoft Entra ID with Serenity Connect, you need: ++* A Microsoft Entra subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* Serenity Connect single sign-on (SSO) enabled subscription. ++## Scenario description ++In this tutorial, you configure and test Microsoft Entra SSO in a test environment. ++* Serenity Connect supports **SP** initiated SSO. ++## Adding Serenity Connect from the gallery ++To configure the integration of Serenity Connect into Microsoft Entra ID, you need to add Serenity Connect from the gallery to your list of managed SaaS apps. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **New application**. +1. In the **Add from the gallery** section, type **Serenity Connect** in the search box. +1. Select **Serenity Connect** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. 
++Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides) ++## Configure and test Microsoft Entra SSO for Serenity Connect ++Configure and test Microsoft Entra SSO with Serenity Connect using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between a Microsoft Entra user and the related user in Serenity Connect. ++To configure and test Microsoft Entra SSO with Serenity Connect, perform the following steps: ++1. **[Configure Microsoft Entra SSO](#configure-microsoft-entra-sso)** - to enable your users to use this feature. + 1. **[Create a Microsoft Entra ID test user](#create-a-microsoft-entra-id-test-user)** - to test Microsoft Entra single sign-on with B.Simon. + 1. **[Assign the Microsoft Entra ID test user](#assign-the-microsoft-entra-id-test-user)** - to enable B.Simon to use Microsoft Entra single sign-on. +1. **[Configure Serenity Connect SSO](#configure-serenity-connect-sso)** - to configure the single sign-on settings on application side. + 1. **[Create Serenity Connect test user](#create-serenity-connect-test-user)** - to have a counterpart of B.Simon in Serenity Connect that is linked to the Microsoft Entra ID representation of user. +1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ++## Configure Microsoft Entra SSO ++Follow these steps to enable Microsoft Entra SSO in the Microsoft Entra admin center. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. 
Browse to **Identity** > **Applications** > **Enterprise applications** > **Serenity Connect** > **Single sign-on**. +1. On the **Select a single sign-on method** page, select **SAML**. +1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings. ++ [ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration") ](common/edit-urls.png#lightbox) ++1. On the **Basic SAML Configuration** section, perform the following steps: ++ a. In the **Identifier** textbox, type a value using the following pattern: + `urn:amazon:cognito:sp:us-east-2_<SerenityUniqueID>` ++ b. In the **Reply URL** textbox, type the URL: + `https://serenityconnect.auth.us-east-2.amazoncognito.com/saml2/idpresponse` ++ c. In the **Sign on URL** textbox, type the URL: + `https://app.serenityconnect.com/sso-sign-in` ++ > [!NOTE] + > This value is not real. Update this value with the actual Identifier. Contact [Serenity Connect support team](mailto:hello@serenityconnect.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Microsoft Entra admin center. ++1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer. ++ [ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate") ](common/copy-metadataurl.png#lightbox) ++### Create a Microsoft Entra ID test user ++In this section, you'll create a test user in the Microsoft Entra admin center called B.Simon. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [User Administrator](../roles/permissions-reference.md#user-administrator). +1. Browse to **Identity** > **Users** > **All users**. +1. Select **New user** > **Create new user**, at the top of the screen. +1. 
In the **User** properties, follow these steps: + 1. In the **Display name** field, enter `B.Simon`. + 1. In the **User principal name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`. + 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box. + 1. Select **Review + create**. +1. Select **Create**. ++### Assign the Microsoft Entra ID test user ++In this section, you enable B.Simon to use Microsoft Entra single sign-on by granting access to Serenity Connect. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **Serenity Connect**. +1. In the app's overview page, select **Users and groups**. +1. Select **Add user/group**, then select **Users and groups** in the **Add Assignment** dialog. + 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. + 1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected. + 1. In the **Add Assignment** dialog, click the **Assign** button. ++## Configure Serenity Connect SSO ++To configure single sign-on on **Serenity Connect** side, you need to send the **App Federation Metadata Url** to [Serenity Connect support team](mailto:hello@serenityconnect.com). They set this setting to have the SAML SSO connection set properly on both sides. ++### Create Serenity Connect test user ++In this section, you create a user called B.Simon in Serenity Connect. Work with [Serenity Connect support team](mailto:hello@serenityconnect.com) to add the users in the Serenity Connect platform. 
Users must be created and activated before you use single sign-on. ++## Test SSO ++In this section, you test your Microsoft Entra single sign-on configuration with the following options. + +* Click on **Test this application** in Microsoft Entra admin center. This redirects to the Serenity Connect Sign-on URL, where you can initiate the login flow. + +* Go to the Serenity Connect Sign-on URL directly and initiate the login flow from there. + +* You can use Microsoft My Apps. When you click the Serenity Connect tile in My Apps, you're redirected to the Serenity Connect Sign-on URL. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ++## Next steps ++Once you configure Serenity Connect, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app). |
active-directory | Sps Production Manager Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sps-production-manager-tutorial.md | + + Title: Microsoft Entra SSO integration with SPS|Production Manager. +description: Learn how to configure single sign-on between Microsoft Entra ID and SPS|Production Manager. ++++++++ Last updated : 09/25/2023+++++# Microsoft Entra SSO integration with SPS|Production Manager ++In this tutorial, you'll learn how to integrate SPS|Production Manager with Microsoft Entra ID. When you integrate SPS|Production Manager with Microsoft Entra ID, you can: ++* Control in Microsoft Entra ID who has access to SPS|Production Manager. +* Enable your users to be automatically signed-in to SPS|Production Manager with their Microsoft Entra accounts. +* Manage your accounts in one central location. ++## Prerequisites ++To integrate Microsoft Entra ID with SPS|Production Manager, you need: ++* A Microsoft Entra subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* SPS|Production Manager single sign-on (SSO) enabled subscription. ++## Scenario description ++In this tutorial, you configure and test Microsoft Entra SSO in a test environment. ++* SPS|Production Manager supports **IDP** initiated SSO. ++## Adding SPS|Production Manager from the gallery ++To configure the integration of SPS|Production Manager into Microsoft Entra ID, you need to add SPS|Production Manager from the gallery to your list of managed SaaS apps. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **New application**. +1. In the **Add from the gallery** section, type **SPS|Production Manager** in the search box. +1. 
Select **SPS|Production Manager** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. ++Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides) ++## Configure and test Microsoft Entra SSO for SPS|Production Manager ++Configure and test Microsoft Entra SSO with SPS|Production Manager using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between a Microsoft Entra user and the related user in SPS|Production Manager. ++To configure and test Microsoft Entra SSO with SPS|Production Manager, perform the following steps: ++1. **[Configure Microsoft Entra SSO](#configure-microsoft-entra-sso)** - to enable your users to use this feature. + 1. **[Create a Microsoft Entra ID test user](#create-a-microsoft-entra-id-test-user)** - to test Microsoft Entra single sign-on with B.Simon. + 1. **[Assign the Microsoft Entra ID test user](#assign-the-microsoft-entra-id-test-user)** - to enable B.Simon to use Microsoft Entra single sign-on. +1. **[Configure SPS|Production Manager SSO](#configure-spsproduction-manager-sso)** - to configure the single sign-on settings on application side. + 1. **[Create SPS|Production Manager test user](#create-spsproduction-manager-test-user)** - to have a counterpart of B.Simon in SPS|Production Manager that is linked to the Microsoft Entra ID representation of user. +1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ++## Configure Microsoft Entra SSO ++Follow these steps to enable Microsoft Entra SSO in the Microsoft Entra admin center. ++1. 
Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **SPS|Production Manager** > **Single sign-on**. +1. On the **Select a single sign-on method** page, select **SAML**. +1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings. ++ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration") ++1. On the **Basic SAML Configuration** section, perform the following steps: ++ a. In the **Identifier** textbox, type one of the following URLs: ++ | Environment | URL | + |-|-| + | Production| `https://microsoft-v20.spsinc.net/microsoft-v20` | + | Staging | `https://microsoft-v20.spsinc.net/microsoft-staging1-v20` | ++ b. In the **Reply URL** textbox, type one of the following URLs: + + | Environment | URL | + |-|-| + | Production| `https://microsoft-v20.spsinc.net/microsoft-v20/saml-auth/AssertionConsumerService` | + | Staging | `https://microsoft-v20.spsinc.net/microsoft-staging1-v20/saml-auth/AssertionConsumerService` | ++1. In the **SAML Signing Certificate** section, click the **Edit** button to open the **SAML Signing Certificate** dialog. ++ ![Screenshot shows to Edit SAML Signing Certificate.](common/edit-certificate.png "Certificate") ++1. In the **SAML Signing Certificate** section, copy the **Thumbprint Value** and save it on your computer. ++ ![Screenshot shows to Copy Thumbprint value.](common/copy-thumbprint.png "Thumbprint") ++1. On the **Set up SPS|Production Manager** section, copy the appropriate URL(s) based on your requirement.
++ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata") ++### Create a Microsoft Entra ID test user ++In this section, you'll create a test user in the Microsoft Entra admin center called B.Simon. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [User Administrator](../roles/permissions-reference.md#user-administrator). +1. Browse to **Identity** > **Users** > **All users**. +1. Select **New user** > **Create new user**, at the top of the screen. +1. In the **User** properties, follow these steps: + 1. In the **Display name** field, enter `B.Simon`. + 1. In the **User principal name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`. + 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box. + 1. Select **Review + create**. +1. Select **Create**. ++### Assign the Microsoft Entra ID test user ++In this section, you enable B.Simon to use Microsoft Entra single sign-on by granting access to SPS|Production Manager. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **SPS|Production Manager**. +1. In the app's overview page, select **Users and groups**. +1. Select **Add user/group**, then select **Users and groups** in the **Add Assignment** dialog. + 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. + 1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected. + 1. In the **Add Assignment** dialog, click the **Assign** button. 
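If you prefer scripting over the portal steps above, the same user-to-app assignment can be sketched with Microsoft Graph PowerShell. This is a minimal sketch, not part of the official tutorial: it assumes the Microsoft.Graph module, admin consent for `AppRoleAssignment.ReadWrite.All`, and that the display names below match your tenant.

```powershell
# Sketch only: assign B.Simon to the SPS|Production Manager enterprise application.
# Assumes the Microsoft.Graph module and AppRoleAssignment.ReadWrite.All consent.
Connect-MgGraph -Scopes "AppRoleAssignment.ReadWrite.All", "User.Read.All"

$user = Get-MgUser -Filter "displayName eq 'B.Simon'"
$sp   = Get-MgServicePrincipal -Filter "displayName eq 'SPS|Production Manager'"

# An all-zeros AppRoleId corresponds to the "Default Access" role mentioned above.
New-MgServicePrincipalAppRoleAssignedTo -ServicePrincipalId $sp.Id `
    -PrincipalId $user.Id `
    -ResourceId $sp.Id `
    -AppRoleId ([Guid]::Empty)
```

The portal flow is the supported path in this tutorial; the script is only a convenience when you repeat the assignment across many test users.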
++## Configure SPS|Production Manager SSO ++To configure single sign-on on **SPS|Production Manager** side, you need to send the **Thumbprint Value** and appropriate copied URLs from Microsoft Entra admin center to [SPS|Production Manager support team](mailto:support@spsinc.net). They set this setting to have the SAML SSO connection set properly on both sides. ++### Create SPS|Production Manager test user ++In this section, you create a user called B.Simon in SPS|Production Manager. Work with [SPS|Production Manager support team](mailto:support@spsinc.net) to add the users in the SPS|Production Manager platform. Users must be created and activated before you use single sign-on. ++## Test SSO ++In this section, you test your Microsoft Entra single sign-on configuration with the following options. + +* Click on **Test this application** in Microsoft Entra admin center and you should be automatically signed in to the SPS|Production Manager for which you set up the SSO. + +* You can use Microsoft My Apps. When you click the SPS|Production Manager tile in My Apps, you should be automatically signed in to the SPS|Production Manager for which you set up the SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ++## Next steps ++Once you configure SPS|Production Manager, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app). |
active-directory | Workday Inbound Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/workday-inbound-tutorial.md | To do this change, you must use [Workday Studio](https://community.workday.com/s 5. Select **Edit attribute list for Workday**. - ![Screenshot that shows the "Workday to Microsoft Entra user Provisioning - Provisioning" page with the "Edit attribute list for Workday" action highlighted.](./media/workday-inbound-tutorial/wdstudio_aad1.png) + ![Screenshot that shows the "Workday to Microsoft Entra user provisioning - Provisioning" page with the "Edit attribute list for Workday" action highlighted.](./media/workday-inbound-tutorial/wdstudio_aad1.png) 6. Scroll to the bottom of the attribute list to where the input fields are. |
active-directory | Fedramp Identification And Authentication Controls | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/fedramp-identification-and-authentication-controls.md | Each row in the following table provides prescriptive guidance to help you devel | FedRAMP Control ID and description | Microsoft Entra guidance and recommendations | | - | - | | **IA-2 User Identification and Authentication**<br>The information system uniquely identifies and authenticates organizational users (or processes acting on behalf of organizational users). | **Uniquely identify and authenticate users or processes acting for users.**<p> Microsoft Entra ID uniquely identifies user and service principal objects directly. Microsoft Entra ID provides multiple authentication methods, and you can configure methods that adhere to National Institute of Standards and Technology (NIST) authentication assurance level (AAL) 3.<p>Identifiers <br> <li>Users: [Working with users in Microsoft Graph: ID property](/graph/api/resources/users)<br><li>Service principals: [ServicePrincipal resource type : ID property](/graph/api/resources/serviceprincipal)<p>Authentication and multifactor authentication<br> <li>[Achieving NIST authenticator assurance levels with the Microsoft identity platform](nist-overview.md) |-| **IA-2(1)**<br>The information system implements multifactor authentication for network access to privileged accounts.<br><br>**IA-2(3)**<br>The information system implements multifactor authentication for local access to privileged accounts. 
| **multifactor authentication for all access to privileged accounts.** <p>Configure the following elements for a complete solution to ensure all access to privileged accounts requires multifactor authentication.<p>Configure Conditional Access policies to require multifactor authentication for all users.<br> Implement Microsoft Entra Privileged Identity Management to require multifactor authentication for activation of privileged role assignment prior to use.<p>With Privileged Identity Management activation requirement, privilege account activation isn't possible without network access, so local access is never privileged.<p>multifactor authentication and Privileged Identity Management<br> <li>[Conditional Access: Require multifactor authentication for all users](../conditional-access/howto-conditional-access-policy-all-users-mfa.md)<br> <li>[Configure Microsoft Entra role settings in Privileged Identity Management](../privileged-identity-management/pim-how-to-change-default-settings.md?tabs=new) | +| **IA-2(1)**<br>The information system implements multifactor authentication for network access to privileged accounts.<br><br>**IA-2(3)**<br>The information system implements multifactor authentication for local access to privileged accounts. 
| **Multifactor authentication for all access to privileged accounts.** <p>Configure the following elements for a complete solution to ensure all access to privileged accounts requires multifactor authentication.<p>Configure Conditional Access policies to require multifactor authentication for all users.<br> Implement Microsoft Entra Privileged Identity Management to require multifactor authentication for activation of privileged role assignment prior to use.<p>With Privileged Identity Management activation requirement, privilege account activation isn't possible without network access, so local access is never privileged.<p>multifactor authentication and Privileged Identity Management<br> <li>[Conditional Access: Require multifactor authentication for all users](../conditional-access/howto-conditional-access-policy-all-users-mfa.md)<br> <li>[Configure Microsoft Entra role settings in Privileged Identity Management](../privileged-identity-management/pim-how-to-change-default-settings.md?tabs=new) | | **IA-2(2)**<br>The information system implements multifactor authentication for network access to non-privileged accounts.<br><br>**IA-2(4)**<br>The information system implements multifactor authentication for local access to nonprivileged accounts. | **Implement multifactor authentication for all access to nonprivileged accounts**<p>Configure the following elements as an overall solution to ensure all access to nonprivileged accounts requires MFA.<p> Configure Conditional Access policies to require MFA for all users.<br> Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM) or group policy objects (GPO) to enforce use of specific authentication methods.<br> Configure Conditional Access policies to enforce device compliance.<p>Microsoft recommends using a multifactor cryptographic hardware authenticator (for example, FIDO2 security keys, Windows Hello for Business (with hardware TPM), or smart card) to achieve AAL3. 
If your organization is cloud-based, we recommend using FIDO2 security keys or Windows Hello for Business.<p>Windows Hello for Business hasn't been validated at the required FIPS 140 Security Level, and as such, federal customers need to conduct a risk assessment and evaluation before accepting it as AAL3. For more information regarding Windows Hello for Business FIPS 140 validation, see [Microsoft NIST AALs](nist-overview.md).<p>See the following guidance; MDM policies differ slightly based on the authentication method. <p>Smart Card / Windows Hello for Business<br> [Passwordless Strategy - Require Windows Hello for Business or smart card](/windows/security/identity-protection/hello-for-business/passwordless-strategy)<br> [Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md)<br> [Conditional Access - Require MFA for all users](../conditional-access/howto-conditional-access-policy-all-users-mfa.md)<p> Hybrid Only<br> [Passwordless Strategy - Configure user accounts to disallow password authentication](/windows/security/identity-protection/hello-for-business/passwordless-strategy)<p> Smart Card Only<br>[Create a Rule to Send an Authentication Method Claim](/windows-server/identity/ad-fs/operations/create-a-rule-to-send-an-authentication-method-claim)<br>[Configure Authentication Policies](/windows-server/identity/ad-fs/operations/configure-authentication-policies)<p>FIDO2 Security Key<br> [Passwordless Strategy - Excluding the password credential provider](/windows/security/identity-protection/hello-for-business/passwordless-strategy)<br> [Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md)<br> [Conditional Access - Require MFA for all users](../conditional-access/howto-conditional-access-policy-all-users-mfa.md)<p>Authentication Methods<br> [Microsoft Entra passwordless sign-in (preview) | FIDO2 security
keys](../authentication/concept-authentication-passwordless.md)<br> [Passwordless security key sign-in Windows - Microsoft Entra ID](../authentication/howto-authentication-passwordless-security-key-windows.md)<br> [ADFS: Certificate Authentication with Microsoft Entra ID and Office 365](/archive/blogs/samueld/adfs-certauth-aad-o365)<br> [How Smart Card Sign-in Works in Windows (Windows 10)](/windows/security/identity-protection/smart-cards/smart-card-how-smart-card-sign-in-works-in-windows)<br> [Windows Hello for Business Overview (Windows 10)](/windows/security/identity-protection/hello-for-business/hello-overview)<p>Additional Resources:<br> [Policy CSP - Windows Client Management](/windows/client-management/mdm/policy-configuration-service-provider)<br>[Plan a passwordless authentication deployment with Microsoft Entra ID](../authentication/howto-authentication-passwordless-deployment.md)<br> | | **IA-2(5)**<br>The organization requires individuals to be authenticated with an individual authenticator when a group authenticator is employed. | **When multiple users have access to a shared or group account password, require each user to first authenticate by using an individual authenticator.**<p>Use an individual account per user. If a shared account is required, Microsoft Entra ID permits binding of multiple authenticators to an account so that each user has an individual authenticator. <p>Resources<br><li>[How it works: Microsoft Entra multifactor authentication](../authentication/concept-mfa-howitworks.md)<br> <li>[Manage authentication methods for Microsoft Entra multifactor authentication](../authentication/howto-mfa-userdevicesettings.md) | | **IA-2(8)**<br>The information system implements replay-resistant authentication mechanisms for network access to privileged accounts. 
| **Implement replay-resistant authentication mechanisms for network access to privileged accounts.**<p>Configure Conditional Access policies to require multifactor authentication for all users. All Microsoft Entra authentication methods at authentication assurance levels 2 and 3 use either a nonce or a challenge and are resistant to replay attacks.<p>References<br> <li>[Conditional Access: Require multifactor authentication for all users](../conditional-access/howto-conditional-access-policy-all-users-mfa.md)<br> <li>[Achieving NIST authenticator assurance levels with the Microsoft identity platform](nist-overview.md) | |
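The "Conditional Access policy requiring multifactor authentication for all users" referenced throughout the table above is usually created in the portal, but as an illustration it can also be sketched through Microsoft Graph PowerShell. This is a hedged sketch, not FedRAMP guidance: it assumes the Microsoft.Graph module and `Policy.ReadWrite.ConditionalAccess` consent, the display name is a placeholder, and report-only mode is used so nothing is enforced immediately.

```powershell
# Sketch only: a Conditional Access policy requiring MFA for all users.
# Assumes the Microsoft.Graph module and Policy.ReadWrite.ConditionalAccess consent.
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

$policy = @{
    displayName = "Require MFA for all users"           # placeholder name
    state       = "enabledForReportingButNotEnforced"   # start in report-only mode
    conditions  = @{
        users        = @{ includeUsers = @("All") }
        applications = @{ includeApplications = @("All") }
    }
    grantControls = @{
        operator        = "OR"
        builtInControls = @("mfa")
    }
}

New-MgIdentityConditionalAccessPolicy -BodyParameter $policy
```

Starting in report-only mode lets you review sign-in impact before switching the policy's `state` to `enabled`, which matters when the policy scopes to all users and all applications.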
active-directory | Plan Issuance Solution | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/plan-issuance-solution.md | Each issuer has a single key set used for signing, updating, and recovery. This ### Microsoft Entra Verified ID service -![Diagram of Microsoft Microsoft Entra Verified ID service](media/plan-issuance-solution/plan-for-issuance-solution-verifiable-credentials-vc-services.png) +![Diagram of Microsoft Entra Verified ID service](media/plan-issuance-solution/plan-for-issuance-solution-verifiable-credentials-vc-services.png) The Microsoft Entra Verified ID service enables you to issue and revoke VCs based on your configuration. The service: |
ai-services | Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/authentication.md | Each request to an Azure AI service must include an authentication header. This * Authenticate with a [single-service](#authenticate-with-a-single-service-resource-key) or [multi-service](#authenticate-with-a-multi-service-resource-key) resource key * Authenticate with a [token](#authenticate-with-an-access-token)-* Authenticate with [Azure Active Directory (AAD)](#authenticate-with-an-access-token) +* Authenticate with [Azure Active Directory (AAD)](#authenticate-with-azure-active-directory) ## Prerequisites Let's quickly review the authentication headers available for use with Azure AI The first option is to authenticate a request with a resource key for a specific service, like Translator. The keys are available in the Azure portal for each resource that you've created. To use a resource key to authenticate a request, it must be passed along as the `Ocp-Apim-Subscription-Key` header. -These sample requests demonstrates how to use the `Ocp-Apim-Subscription-Key` header. Keep in mind, when using this sample you'll need to include a valid resource key. +These sample requests demonstrate how to use the `Ocp-Apim-Subscription-Key` header. Keep in mind that when you use this sample, you'll need to include a valid resource key. This is a sample call to the Translator service: ```cURL curl -X POST 'https://api.cognitive.microsofttranslator.com/translate?api-versio --data-raw '[{ "text": "How much for the cup of coffee?" }]' | json_pp ``` +## Authenticate with Azure Active Directory ++> [!IMPORTANT] +> Azure AD authentication always needs to be used together with the custom subdomain name of your Azure resource. [Regional endpoints](./cognitive-services-custom-subdomains.md#is-there-a-list-of-regional-endpoints) do not support Azure AD authentication.
++In the previous sections, we showed you how to authenticate against Azure AI services using a single-service or multi-service subscription key. While these keys provide a quick and easy path to start development, they fall short in more complex scenarios that require Azure [role-based access control (Azure RBAC)](../../articles/role-based-access-control/overview.md). Let's take a look at what's required to authenticate using Azure Active Directory (Azure AD). ++In the following sections, you'll use either the Azure Cloud Shell environment or the Azure CLI to create a subdomain, assign roles, and obtain a bearer token to call Azure AI services. If you get stuck, links are provided in each section with all available options for each command in Azure Cloud Shell/Azure CLI. ++> [!IMPORTANT] +> If your organization is doing authentication through Azure AD, you should [disable local authentication](./disable-local-auth.md) (authentication with keys) so that users in the organization must always use Azure AD. ++### Create a resource with a custom subdomain ++The first step is to create a custom subdomain. If you want to use an existing Azure AI services resource that doesn't have a custom subdomain name, follow the instructions in [Azure AI services custom subdomains](./cognitive-services-custom-subdomains.md#how-does-this-impact-existing-resources) to enable a custom subdomain for your resource. ++1. Start by opening the Azure Cloud Shell. Then [select a subscription](/powershell/module/az.accounts/set-azcontext): ++ ```powershell-interactive + Set-AzContext -SubscriptionName <SubscriptionName> + ``` ++2. Next, [create an Azure AI services resource](/powershell/module/az.cognitiveservices/new-azcognitiveservicesaccount) with a custom subdomain. The subdomain name needs to be globally unique and can't include special characters, such as ".", "!", or ",". 
++ ```powershell-interactive + $account = New-AzCognitiveServicesAccount -ResourceGroupName <RESOURCE_GROUP_NAME> -Name <ACCOUNT_NAME> -Type <ACCOUNT_TYPE> -SkuName <SUBSCRIPTION_TYPE> -Location <REGION> -CustomSubdomainName <UNIQUE_SUBDOMAIN> + ``` ++3. If successful, the **Endpoint** should show the subdomain name unique to your resource. +++### Assign a role to a service principal ++Now that you have a custom subdomain associated with your resource, you need to assign a role to a service principal. ++> [!NOTE] +> Keep in mind that Azure role assignments may take up to five minutes to propagate. ++1. First, let's register an [Azure AD application](/powershell/module/Az.Resources/New-AzADApplication). ++ ```powershell-interactive + $SecureStringPassword = ConvertTo-SecureString -String <YOUR_PASSWORD> -AsPlainText -Force ++ $app = New-AzADApplication -DisplayName <APP_DISPLAY_NAME> -IdentifierUris <APP_URIS> -PasswordCredentials $SecureStringPassword + ``` ++ You're going to need the **ApplicationId** in the next step. ++2. Next, you need to [create a service principal](/powershell/module/az.resources/new-azadserviceprincipal) for the Azure AD application. ++ ```powershell-interactive + New-AzADServicePrincipal -ApplicationId <APPLICATION_ID> + ``` ++ >[!NOTE] + > If you register an application in the Azure portal, this step is completed for you. ++3. The last step is to [assign the "Cognitive Services User" role](/powershell/module/az.Resources/New-azRoleAssignment) to the service principal (scoped to the resource). By assigning a role, you're granting the service principal access to this resource. You can grant the same service principal access to multiple resources in your subscription. + >[!NOTE] + > The ObjectId of the service principal is used, not the ObjectId for the application. + > The ACCOUNT_ID is the Azure resource ID of the Azure AI services account you created. 
You can find the Azure resource ID under the resource's properties in the Azure portal. ++ ```powershell-interactive + New-AzRoleAssignment -ObjectId <SERVICE_PRINCIPAL_OBJECTID> -Scope <ACCOUNT_ID> -RoleDefinitionName "Cognitive Services User" + ``` ++### Sample request ++In this sample, a password is used to authenticate the service principal. The token provided is then used to call the Computer Vision API. ++1. Get your **TenantId**: + ```powershell-interactive + $context=Get-AzContext + $context.Tenant.Id + ``` ++2. Get a token: + > [!NOTE] + > If you're using Azure Cloud Shell, the `SecureClientSecret` class isn't available. ++ #### [PowerShell](#tab/powershell) + ```powershell-interactive + $authContext = New-Object "Microsoft.IdentityModel.Clients.ActiveDirectory.AuthenticationContext" -ArgumentList "https://login.windows.net/<TENANT_ID>" + $secureSecretObject = New-Object "Microsoft.IdentityModel.Clients.ActiveDirectory.SecureClientSecret" -ArgumentList $SecureStringPassword + $clientCredential = New-Object "Microsoft.IdentityModel.Clients.ActiveDirectory.ClientCredential" -ArgumentList $app.ApplicationId, $secureSecretObject + $token=$authContext.AcquireTokenAsync("https://cognitiveservices.azure.com/", $clientCredential).Result + $token + ``` + + #### [Azure Cloud Shell](#tab/azure-cloud-shell) + ```powershell-interactive + $authContext = New-Object "Microsoft.IdentityModel.Clients.ActiveDirectory.AuthenticationContext" -ArgumentList "https://login.windows.net/<TENANT_ID>" + $clientCredential = New-Object "Microsoft.IdentityModel.Clients.ActiveDirectory.ClientCredential" -ArgumentList $app.ApplicationId, <YOUR_PASSWORD> + $token=$authContext.AcquireTokenAsync("https://cognitiveservices.azure.com/", $clientCredential).Result + $token + ``` ++ ++3. 
Call the Computer Vision API: + ```powershell-interactive + $url = $account.Endpoint+"vision/v1.0/models" + $result = Invoke-RestMethod -Uri $url -Method Get -Headers @{"Authorization"=$token.CreateAuthorizationHeader()} -Verbose + $result | ConvertTo-Json + ``` ++Alternatively, the service principal can be authenticated with a certificate. Besides service principals, user principals are also supported, with permissions delegated through another Azure AD application. In that case, instead of a password or certificate, users are prompted for two-factor authentication when acquiring a token. ++## Authorize access to managed identities + +Azure AI services support Azure Active Directory (Azure AD) authentication with [managed identities for Azure resources](../../articles/active-directory/managed-identities-azure-resources/overview.md). Managed identities for Azure resources can authorize access to Azure AI services resources using Azure AD credentials from applications running in Azure virtual machines (VMs), function apps, virtual machine scale sets, and other services. By using managed identities for Azure resources together with Azure AD authentication, you can avoid storing credentials with your applications that run in the cloud. ++### Enable managed identities on a VM ++Before you can use managed identities for Azure resources to authorize access to Azure AI services resources from your VM, you must enable managed identities for Azure resources on the VM. 
To learn how to enable managed identities for Azure resources, see: ++- [Azure portal](../../articles/active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md) +- [Azure PowerShell](../../articles/active-directory/managed-identities-azure-resources/qs-configure-powershell-windows-vm.md) +- [Azure CLI](../../articles/active-directory/managed-identities-azure-resources/qs-configure-cli-windows-vm.md) +- [Azure Resource Manager template](../../articles/active-directory/managed-identities-azure-resources/qs-configure-template-windows-vm.md) +- [Azure Resource Manager client libraries](../../articles/active-directory/managed-identities-azure-resources/qs-configure-sdk-windows-vm.md) ++For more information about managed identities, see [Managed identities for Azure resources](../../articles/active-directory/managed-identities-azure-resources/overview.md). ## Use Azure Key Vault to securely access credentials |
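The managed identity flow described in the authentication article above boils down to two pieces: request a token from the VM's Instance Metadata Service (IMDS), then pass it as a bearer header on calls to the service. A minimal Python sketch under those assumptions; the IMDS endpoint and the `https://cognitiveservices.azure.com/` resource URI are the standard ones, while the helper names are illustrative and not part of any SDK:

```python
import json
import urllib.parse
import urllib.request

# Azure Instance Metadata Service (IMDS) token endpoint: a fixed,
# non-routable address reachable only from inside an Azure VM.
IMDS_TOKEN_ENDPOINT = "http://169.254.169.254/metadata/identity/oauth2/token"

def imds_token_url(resource: str, api_version: str = "2018-02-01") -> str:
    """Build the IMDS request URL for an access token scoped to `resource`."""
    query = urllib.parse.urlencode({"api-version": api_version, "resource": resource})
    return f"{IMDS_TOKEN_ENDPOINT}?{query}"

def auth_header(access_token: str) -> dict:
    """Form the bearer Authorization header expected by Azure AI services."""
    return {"Authorization": f"Bearer {access_token}"}

def get_managed_identity_token(resource: str) -> str:
    """Request a token from IMDS (works only on an Azure VM with a managed
    identity enabled; the Metadata header is required by IMDS)."""
    req = urllib.request.Request(imds_token_url(resource), headers={"Metadata": "true"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["access_token"]

# Token request URL scoped to Azure AI services:
url = imds_token_url("https://cognitiveservices.azure.com/")
```

On a VM, `auth_header(get_managed_identity_token(...))` would be merged into the headers of the actual service request, replacing the `Ocp-Apim-Subscription-Key` header shown earlier.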
ai-services | Disable Local Auth | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/disable-local-auth.md | + + Title: Disable local authentication in Azure AI Services ++description: "This article describes disabling local authentication in Azure AI Services." +++++ Last updated : 09/22/2023++++# Disable local authentication in Azure AI Services ++Azure AI Services provides Azure Active Directory (Azure AD) authentication support for all resources. This gives organizations the ability to disable local authentication methods and enforce Azure AD authentication. This feature provides you with seamless integration when you require centralized control and management of identities and resource credentials. ++You can disable local authentication using the Azure policy [Cognitive Services accounts should have local authentication methods disabled](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc). You can set it at the subscription level or resource group level to enforce the policy for a group of services. ++Disabling local authentication doesn't take effect immediately. Allow a few minutes for the service to block future authentication requests. ++You can use PowerShell to determine whether the local authentication policy is currently enabled. First, sign in with the `Connect-AzAccount` command. Then use the cmdlet **[Get-AzCognitiveServicesAccount](/powershell/module/az.cognitiveservices/get-azcognitiveservicesaccount)** to retrieve your resource, and check the property `DisableLocalAuth`. A value of `true` means local authentication is disabled. +++## Re-enable local authentication ++To enable local authentication, execute the PowerShell cmdlet **[Set-AzCognitiveServicesAccount](/powershell/module/az.cognitiveservices/set-azcognitiveservicesaccount)** with the parameter `-DisableLocalAuth $false`. 
Allow a few minutes for the service to accept the change and start allowing local authentication requests again. ++## Next steps +- [Authenticate requests to Azure AI services](./authentication.md) |
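The `DisableLocalAuth` check described in the disable-local-auth article above can also be done against the Azure Resource Manager REST API, which exposes the same flag as `properties.disableLocalAuth` on the account resource. A minimal Python sketch; the API version and helper names are assumptions for illustration, and authentication/sending of the request is omitted:

```python
import urllib.parse

ARM_BASE = "https://management.azure.com"
API_VERSION = "2023-05-01"  # assumed Cognitive Services ARM API version

def account_url(subscription_id: str, resource_group: str, account_name: str) -> str:
    """Build the ARM GET URL for a Cognitive Services account."""
    path = (f"/subscriptions/{subscription_id}"
            f"/resourceGroups/{resource_group}"
            f"/providers/Microsoft.CognitiveServices/accounts/{account_name}")
    return f"{ARM_BASE}{path}?{urllib.parse.urlencode({'api-version': API_VERSION})}"

def local_auth_disabled(account_json: dict) -> bool:
    """Read the disableLocalAuth flag from a parsed ARM account response body."""
    return bool(account_json.get("properties", {}).get("disableLocalAuth", False))

# Sample (truncated) ARM response body for an account with key auth disabled:
sample = {"properties": {"disableLocalAuth": True}}
```

A `GET` on `account_url(...)` with a valid ARM bearer token would return the full account body; `local_auth_disabled` then mirrors the `DisableLocalAuth` check done with `Get-AzCognitiveServicesAccount` in PowerShell.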
ai-services | Batch Synthesis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-synthesis.md | description: Learn how to use the batch synthesis API for asynchronous synthesis --+ Last updated 11/16/2022 |
ai-services | Batch Transcription Audio Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-transcription-audio-data.md | |
ai-services | Batch Transcription Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-transcription-create.md | |
ai-services | Batch Transcription Get | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-transcription-get.md | |
ai-services | Batch Transcription | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-transcription.md | |
ai-services | Call Center Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/call-center-overview.md | description: Azure AI services for Language and Speech can help you realize part --+ Last updated 09/18/2022 |
ai-services | Call Center Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/call-center-quickstart.md | description: In this quickstart, you perform sentiment analysis and conversation --+ Last updated 09/20/2022 |
ai-services | Call Center Telephony Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/call-center-telephony-integration.md | description: A common scenario for speech to text is transcribing large volumes --+ Last updated 08/10/2022 |
ai-services | Captioning Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/captioning-concepts.md | description: An overview of key concepts for captioning with speech to text. --+ Last updated 06/02/2022 |
ai-services | Captioning Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/captioning-quickstart.md | description: In this quickstart, you convert speech to text as captions. --+ Last updated 04/23/2022 |
ai-services | Custom Commands Encryption Of Data At Rest | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/custom-commands-encryption-of-data-at-rest.md | description: Custom Commands encryption of data at rest. --+ Last updated 07/05/2020 |
ai-services | Custom Commands References | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/custom-commands-references.md | description: In this article, you learn about concepts and definitions for Custo --+ Last updated 06/18/2020 |
ai-services | Custom Commands | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/custom-commands.md | description: An overview of the features, capabilities, and restrictions for Cus --+ Last updated 03/11/2020 |
ai-services | Custom Keyword Basics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/custom-keyword-basics.md | description: When a user speaks the keyword, your device sends their dictation t --+ Last updated 11/12/2021 |
ai-services | Custom Neural Voice Lite | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/custom-neural-voice-lite.md | description: Use Custom Neural Voice Lite to demo and evaluate Custom Neural Voi --+ Last updated 10/27/2022 |
ai-services | Custom Neural Voice | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/custom-neural-voice.md | description: Custom Neural Voice is a text to speech feature that allows you to --+ Last updated 03/27/2023 |
ai-services | Custom Speech Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/custom-speech-overview.md | description: Custom Speech is a set of online tools that allows you to evaluate --+ Last updated 09/15/2023 |
ai-services | Devices Sdk Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/devices-sdk-release-notes.md | description: The release notes provide a log of updates, enhancements, bug fixes --+ Last updated 02/12/2022 |
ai-services | Direct Line Speech | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/direct-line-speech.md | description: An overview of the features, capabilities, and restrictions for Voi --+ Last updated 03/11/2020 |
ai-services | Display Text Format | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/display-text-format.md | description: An overview of key concepts for display text formatting with speech --+ Last updated 09/19/2022 |
ai-services | Embedded Speech | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/embedded-speech.md | description: Embedded Speech is designed for on-device scenarios where cloud con --+ Last updated 10/31/2022 |
ai-services | Gaming Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/gaming-concepts.md | description: Concepts for game development with Azure AI Speech. --+ Last updated 01/25/2023 |
ai-services | Get Speech Recognition Results | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/get-speech-recognition-results.md | description: Learn how to get speech recognition results. --+ Last updated 06/13/2022 |
ai-services | Get Started Intent Recognition Clu | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/get-started-intent-recognition-clu.md | description: In this quickstart, you recognize intents from audio data with the --+ Last updated 02/22/2023 |
ai-services | Get Started Intent Recognition | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/get-started-intent-recognition.md | description: In this quickstart, you recognize intents from audio data with the --+ Last updated 02/22/2023 |
ai-services | Get Started Speaker Recognition | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/get-started-speaker-recognition.md | description: In this quickstart, you use speaker recognition to confirm who is s --+ Last updated 01/08/2022 |
ai-services | Get Started Speech To Text | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/get-started-speech-to-text.md | description: In this quickstart, learn how to convert speech to text with recogn --+ Last updated 08/24/2023 |
ai-services | Get Started Speech Translation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/get-started-speech-translation.md | description: In this quickstart, you translate speech from one language to text --+ Last updated 09/16/2022 |
ai-services | Get Started Stt Diarization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/get-started-stt-diarization.md | description: In this quickstart, you convert speech to text continuously from a --+ Last updated 7/27/2023 |
ai-services | Get Started Text To Speech | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/get-started-text-to-speech.md | description: In this quickstart, you convert text to speech. Learn about object --+ Last updated 08/25/2023 |
ai-services | How To Async Meeting Transcription | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-async-meeting-transcription.md | |
ai-services | How To Audio Content Creation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-audio-content-creation.md | description: Audio Content Creation is an online tool that allows you to run Tex --+ Last updated 09/25/2022 |
ai-services | How To Configure Azure Ad Auth | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-configure-azure-ad-auth.md | description: Learn how to authenticate using Azure Active Directory Authenticati --+ Last updated 06/18/2021 |
ai-services | How To Configure Openssl Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-configure-openssl-linux.md | description: Learn how to configure OpenSSL for Linux. --+ Last updated 06/22/2022 |
ai-services | How To Configure Rhel Centos 7 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-configure-rhel-centos-7.md | description: Learn how to configure RHEL/CentOS 7 so that the Speech SDK can be --+ Last updated 04/01/2022 |
ai-services | How To Control Connections | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-control-connections.md | description: Learn how to monitor for connection status and manually connect or --+ Last updated 04/12/2021 |
ai-services | How To Custom Commands Debug Build Time | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-commands-debug-build-time.md | description: In this article, you learn how to debug errors when authoring Custo --+ Last updated 06/18/2020 |
ai-services | How To Custom Commands Debug Runtime | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-commands-debug-runtime.md | description: In this article, you learn how to debug runtime errors in a Custom --+ Last updated 06/18/2020 |
ai-services | How To Custom Commands Deploy Cicd | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-commands-deploy-cicd.md | description: In this article, you learn how to set up continuous deployment for --+ Last updated 06/18/2020 |
ai-services | How To Custom Commands Developer Flow Test | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-commands-developer-flow-test.md | description: In this article, you learn different approaches to testing a custom --+ Last updated 06/18/2020 |
ai-services | How To Custom Commands Send Activity To Client | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-commands-send-activity-to-client.md | description: In this article, you learn how to send activity from a Custom Comma --+ Last updated 06/18/2020 |
ai-services | How To Custom Commands Setup Speech Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-commands-setup-speech-sdk.md | description: how to make requests to a published Custom Commands application fro --+ Last updated 06/18/2020 |
ai-services | How To Custom Commands Setup Web Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-commands-setup-web-endpoints.md | description: set up web endpoints for Custom Commands --+ Last updated 06/18/2020 |
ai-services | How To Custom Commands Update Command From Client | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-commands-update-command-from-client.md | description: Learn how to update a command from a client application. --+ Last updated 10/20/2020 |
ai-services | How To Custom Commands Update Command From Web Endpoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-commands-update-command-from-web-endpoint.md | description: Learn how to update the state of a command by using a call to a web --+ Last updated 10/20/2020 |
ai-services | How To Custom Speech Continuous Integration Continuous Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-speech-continuous-integration-continuous-deployment.md | description: Apply DevOps with Custom Speech and CI/CD workflows. Implement an e --+ Last updated 05/08/2022 |
ai-services | How To Custom Speech Create Project | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-speech-create-project.md | description: Learn about how to create a project for Custom Speech. --+ Last updated 11/29/2022 |
ai-services | How To Custom Speech Deploy Model | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-speech-deploy-model.md | description: Learn how to deploy Custom Speech models. --+ Last updated 11/29/2022 |
ai-services | How To Custom Speech Evaluate Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-speech-evaluate-data.md | description: In this article, you learn how to quantitatively measure and improv --+ Last updated 11/29/2022 |
ai-services | How To Custom Speech Human Labeled Transcriptions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-speech-human-labeled-transcriptions.md | description: You use human-labeled transcriptions with your audio data to improv --+ Last updated 05/08/2022 |
ai-services | How To Custom Speech Inspect Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-speech-inspect-data.md | description: Custom Speech lets you qualitatively inspect the recognition qualit --+ Last updated 11/29/2022 |
ai-services | How To Custom Speech Model And Endpoint Lifecycle | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-speech-model-and-endpoint-lifecycle.md | description: Custom Speech provides base models for training and lets you create --+ Last updated 11/29/2022 |
ai-services | How To Custom Speech Test And Train | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-speech-test-and-train.md | description: Learn about types of training and testing data for a Custom Speech --+ Last updated 10/24/2022 |
ai-services | How To Custom Speech Train Model | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-speech-train-model.md | description: Learn how to train Custom Speech models. Training a speech to text --+ Last updated 09/15/2023 |
ai-services | How To Custom Speech Transcription Editor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-speech-transcription-editor.md | description: The online transcription editor allows you to create or edit audio --+ Last updated 05/08/2022 |
ai-services | How To Custom Speech Upload Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-speech-upload-data.md | description: Learn about how to upload data to test or train a Custom Speech mod --+ Last updated 11/29/2022 |
ai-services | How To Custom Voice Create Voice | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-voice-create-voice.md | description: Learn how to train a custom neural voice through the Speech Studio --+ Last updated 08/25/2023 |
ai-services | How To Custom Voice Prepare Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-voice-prepare-data.md | description: "Learn how to provide studio recordings and the associated scripts --+ Last updated 10/27/2022 |
ai-services | How To Custom Voice Talent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-voice-talent.md | description: Create a voice talent profile with an audio file recorded by the vo --+ Last updated 10/27/2022 |
ai-services | How To Custom Voice Training Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-voice-training-data.md | description: "Learn about the data types that you can use to train a Custom Neur --+ Last updated 10/27/2022 |
ai-services | How To Custom Voice | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-voice.md | description: Learn how to create a Custom Neural Voice project that contains dat --+ Last updated 10/27/2022 |
ai-services | How To Deploy And Use Endpoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-deploy-and-use-endpoint.md | description: Learn about how to deploy and use a custom neural voice model. --+ Last updated 11/30/2022 |
ai-services | How To Develop Custom Commands Application | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-develop-custom-commands-application.md | |
ai-services | How To Lower Speech Synthesis Latency | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-lower-speech-synthesis-latency.md | description: How to lower speech synthesis latency using Speech SDK, including s --+ Last updated 04/29/2021 |
ai-services | How To Migrate To Custom Neural Voice | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-migrate-to-custom-neural-voice.md | description: This document helps users migrate from custom voice to custom neura --+ Last updated 11/12/2021 |
ai-services | How To Migrate To Prebuilt Neural Voice | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-migrate-to-prebuilt-neural-voice.md | description: This document helps users migrate from prebuilt standard voice to p --+ Last updated 11/12/2021 |
ai-services | How To Pronunciation Assessment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-pronunciation-assessment.md | description: Learn about pronunciation assessment features that are currently pu --+ Last updated 06/05/2023 |
ai-services | How To Recognize Intents From Speech Csharp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-recognize-intents-from-speech-csharp.md | description: In this guide, you learn how to recognize intents from speech using --+ Last updated 02/08/2022 |
ai-services | How To Recognize Speech | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-recognize-speech.md | description: Learn how to convert speech to text, including object construction, --+ Last updated 09/01/2023 |
ai-services | How To Select Audio Input Devices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-select-audio-input-devices.md | description: 'Learn about selecting audio input devices in the Speech SDK (C++, --+ Last updated 07/05/2019 |
ai-services | How To Speech Synthesis Viseme | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-speech-synthesis-viseme.md | description: Speech SDK supports viseme events during speech synthesis, which re --+ Last updated 10/23/2022 |
ai-services | How To Speech Synthesis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-speech-synthesis.md | |
ai-services | How To Track Speech Sdk Memory Usage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-track-speech-sdk-memory-usage.md | description: The Speech SDK supports numerous programming languages for speech t --+ Last updated 12/10/2019 |
ai-services | How To Translate Speech | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-translate-speech.md | description: Learn how to translate speech from one language to text in another --+ Last updated 06/08/2022 |
ai-services | How To Use Audio Input Streams | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-use-audio-input-streams.md | description: An overview of the capabilities of the Speech SDK audio input strea --+ Last updated 05/09/2023 |
ai-services | How To Use Codec Compressed Audio Input Streams | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-use-codec-compressed-audio-input-streams.md | |
ai-services | How To Use Custom Entity Pattern Matching | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-use-custom-entity-pattern-matching.md | description: In this guide, you learn how to recognize intents and custom entiti --+ Last updated 11/15/2021 |
ai-services | How To Use Logging | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-use-logging.md | |
ai-services | How To Use Meeting Transcription | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-use-meeting-transcription.md | description: In this quickstart, learn how to transcribe meetings. You can add, --+ Last updated 05/06/2023 |
ai-services | How To Use Simple Language Pattern Matching | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-use-simple-language-pattern-matching.md | description: In this guide, you learn how to recognize intents and entities from --+ Last updated 04/19/2022 |
ai-services | How To Windows Voice Assistants Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-windows-voice-assistants-get-started.md | description: The steps to begin developing a windows voice agent, including a re --+ Last updated 04/15/2020 |
ai-services | Improve Accuracy Phrase List | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/improve-accuracy-phrase-list.md | Title: Improve recognition accuracy with phrase list description: Phrase lists can be used to customize speech recognition results based on context. --+ Last updated 09/01/2022 |
ai-services | Ingestion Client | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/ingestion-client.md | description: In this article we describe a tool released on GitHub that enables --+ Last updated 08/29/2022 |
ai-services | Intent Recognition | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/intent-recognition.md | |
ai-services | Language Identification | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/language-identification.md | description: Language identification is used to determine the language being spo --+ Last updated 9/19/2023 |
ai-services | Language Learning Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/language-learning-overview.md | description: Azure AI services for Speech can be used to learn languages. --+ Last updated 02/23/2023 |
ai-services | Language Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/language-support.md | description: The Speech service supports numerous languages for speech to text a --+ Last updated 01/12/2023 |
ai-services | Meeting Transcription | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/meeting-transcription.md | description: You use the meeting transcription feature for meetings. It combines --+ Last updated 05/06/2023 |
ai-services | Migrate To Batch Synthesis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/migrate-to-batch-synthesis.md | description: This document helps developers migrate code from Long Audio REST AP --+ Last updated 09/01/2022 |
ai-services | Migrate V2 To V3 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/migrate-v2-to-v3.md | description: This document helps developers migrate code from v2 to v3 of the Sp --+ Last updated 09/15/2023 |
ai-services | Migrate V3 0 To V3 1 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/migrate-v3-0-to-v3-1.md | description: This document helps developers migrate code from v3.0 to v3.1 of th --+ Last updated 09/15/2023 |
ai-services | Migration Overview Neural Voice | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/migration-overview-neural-voice.md | description: This document summarizes the benefits of migration from non-neural --+ Last updated 11/12/2021 |
ai-services | Multi Device Conversation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/multi-device-conversation.md | description: Multi-device conversation makes it easy to create a speech or text --+ Last updated 02/19/2022 |
ai-services | Openai Speech | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/openai-speech.md | description: In this how-to guide, you can use Speech to converse with Azure Ope --+ Last updated 04/15/2023 |
ai-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/overview.md | description: The Speech service provides speech to text, text to speech, and spe --+ Last updated 09/16/2022 |
ai-services | Pattern Matching Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/pattern-matching-overview.md | description: Pattern Matching with the IntentRecognizer helps you get started qu --+ Last updated 11/15/2021 keywords: intent recognition pattern matching |
ai-services | Power Automate Batch Transcription | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/power-automate-batch-transcription.md | |
ai-services | Pronunciation Assessment Tool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/pronunciation-assessment-tool.md | description: The pronunciation assessment tool in Speech Studio gives you feedba --+ Last updated 09/08/2022 |
ai-services | Quickstart Custom Commands Application | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/quickstart-custom-commands-application.md | description: In this quickstart, you create and test a basic Custom Commands app --+ Last updated 02/19/2022 |
ai-services | Multi Device Conversation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/quickstarts/multi-device-conversation.md | description: In this quickstart, you'll learn how to create and join clients to --+ Last updated 06/25/2020 |
ai-services | Setup Platform | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/quickstarts/setup-platform.md | description: In this quickstart, you learn how to install the Speech SDK for you --+ Last updated 09/05/2023 |
ai-services | Voice Assistants | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/quickstarts/voice-assistants.md | description: In this quickstart, you use the Speech SDK to create a custom voice --+ Last updated 06/25/2020 |
ai-services | Record Custom Voice Samples | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/record-custom-voice-samples.md | description: Make a production-quality custom voice by preparing a robust script --+ Last updated 10/14/2022 |
ai-services | Regions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/regions.md | description: A list of available regions and endpoints for the Speech service, i --+ Last updated 09/16/2022 |
ai-services | Releasenotes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/releasenotes.md | |
ai-services | Resiliency And Recovery Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/resiliency-and-recovery-plan.md | |
ai-services | Rest Speech To Text Short | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/rest-speech-to-text-short.md | description: Learn how to use Speech to text REST API for short audio to convert --+ Last updated 05/02/2023 |
ai-services | Rest Speech To Text | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/rest-speech-to-text.md | description: Get reference documentation for Speech to text REST API. --+ Last updated 09/15/2023 |
ai-services | Rest Text To Speech | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/rest-text-to-speech.md | description: Learn how to use the REST API to convert text into synthesized spee --+ Last updated 01/24/2022 |
ai-services | Role Based Access Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/role-based-access-control.md | description: Learn how to assign access roles for a Speech resource. --+ Last updated 04/03/2022 |
ai-services | Speaker Recognition Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speaker-recognition-overview.md | description: Speaker recognition provides algorithms that verify and identify sp --+ Last updated 01/08/2022 |
ai-services | Speech Container Batch Processing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-container-batch-processing.md | description: Use the Batch processing kit to scale Speech container requests. --+ Last updated 10/22/2020 |
ai-services | Speech Container Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-container-configuration.md | description: Speech service provides each container with a common configuration --+ Last updated 04/18/2023 |
ai-services | Speech Container Cstt | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-container-cstt.md | description: Install and run custom speech to text containers with Docker to per --+ Last updated 08/29/2023 |
ai-services | Speech Container Howto On Premises | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-container-howto-on-premises.md | description: Using Kubernetes and Helm to define the speech to text and text to --+ Last updated 07/22/2021 |
ai-services | Speech Container Howto | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-container-howto.md | description: Use the Speech containers with Docker to perform speech recognition --+ Last updated 04/18/2023 |
ai-services | Speech Container Lid | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-container-lid.md | description: Install and run language identification containers with Docker to p --+ Last updated 08/28/2023 |
ai-services | Speech Container Ntts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-container-ntts.md | description: Install and run neural text to speech containers with Docker to per --+ Last updated 08/28/2023 |
ai-services | Speech Container Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-container-overview.md | description: Use the Docker containers for the Speech service to perform speech --+ Last updated 09/11/2023 |
ai-services | Speech Container Stt | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-container-stt.md | description: Install and run speech to text containers with Docker to perform sp --+ Last updated 08/28/2023 |
ai-services | Speech Devices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-devices.md | description: Get started with the Speech devices. The Speech service works with --+ Last updated 12/27/2021 |
ai-services | Speech Encryption Of Data At Rest | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-encryption-of-data-at-rest.md | |
ai-services | Speech Sdk Microphone | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-sdk-microphone.md | description: Speech SDK microphone array recommendations. These array geometries --+ Last updated 12/27/2021 |
ai-services | Speech Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-sdk.md | description: The Speech software development kit (SDK) exposes many of the Speec --+ Last updated 09/16/2022 |
ai-services | Speech Ssml Phonetic Sets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-ssml-phonetic-sets.md | description: This article presents Speech service phonetic alphabet and Internat --+ Last updated 09/16/2022 |
ai-services | Speech Studio Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-studio-overview.md | description: Speech Studio is a set of UI-based tools for building and integrati --+ Last updated 09/25/2022 |
ai-services | Speech Synthesis Markup Pronunciation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-synthesis-markup-pronunciation.md | description: Learn about Speech Synthesis Markup Language (SSML) elements to imp --+ Last updated 11/30/2022 |
ai-services | Speech Synthesis Markup Structure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-synthesis-markup-structure.md | description: Learn about the Speech Synthesis Markup Language (SSML) document st --+ Last updated 11/30/2022 |
ai-services | Speech Synthesis Markup Voice | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-synthesis-markup-voice.md | description: Learn about how you can use Speech Synthesis Markup Language (SSML) --+ Last updated 8/24/2023 |
ai-services | Speech Synthesis Markup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-synthesis-markup.md | description: Learn how to use the Speech Synthesis Markup Language to control pr --+ Last updated 8/16/2023 |
ai-services | Speech To Text | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-to-text.md | description: Get an overview of the benefits and capabilities of the speech to t --+ Last updated 04/05/2023 |
ai-services | Speech Translation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-translation.md | description: With speech translation, you can add end-to-end, real-time, multi-l --+ Last updated 09/16/2022 |
ai-services | Spx Basics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/spx-basics.md | description: In this Azure AI Speech CLI quickstart, you interact with speech to --+ Last updated 09/16/2022 |
ai-services | Spx Batch Operations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/spx-batch-operations.md | description: Learn how to do batch speech to text (speech recognition), batch te --+ Last updated 09/16/2022 |
ai-services | Spx Data Store Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/spx-data-store-configuration.md | |
ai-services | Spx Output Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/spx-output-options.md | |
ai-services | Spx Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/spx-overview.md | description: In this article, you learn about the Speech CLI, a command-line too --+ Last updated 09/16/2022 |
ai-services | Swagger Documentation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/swagger-documentation.md | |
ai-services | Text To Speech | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/text-to-speech.md | description: Get an overview of the benefits and capabilities of the text to spe --+ Last updated 09/25/2022 |
ai-services | Troubleshooting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/troubleshooting.md | description: This article provides information to help you solve issues you migh --+ Last updated 12/08/2022 |
ai-services | Tutorial Voice Enable Your Bot Speech Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/tutorial-voice-enable-your-bot-speech-sdk.md | description: In this tutorial, you'll create an echo bot and configure a client --+ Last updated 01/24/2022 |
ai-services | Voice Assistants | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/voice-assistants.md | description: An overview of the features, capabilities, and restrictions for voi --+ Last updated 03/11/2020 |
ai-services | Windows Voice Assistants Automatic Enablement Guidelines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/windows-voice-assistants-automatic-enablement-guidelines.md | description: The instructions to enable voice activation for a voice agent by d --+ Last updated 04/15/2020 |
ai-services | Windows Voice Assistants Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/windows-voice-assistants-best-practices.md | description: Guidelines for best practices when designing a voice agent experien --+ Last updated 05/1/2020 |
ai-services | Windows Voice Assistants Implementation Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/windows-voice-assistants-implementation-guide.md | description: The instructions to implement voice activation and above-lock capab --+ Last updated 04/15/2020 |
ai-services | Windows Voice Assistants Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/windows-voice-assistants-overview.md | description: An overview of the voice assistants on Windows, including capabilit --+ Last updated 02/19/2022 |
aks | Cluster Autoscaler | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-autoscaler.md | Title: Use the cluster autoscaler in Azure Kubernetes Service (AKS) description: Learn how to use the cluster autoscaler to automatically scale your Azure Kubernetes Service (AKS) clusters to meet application demands. Previously updated : 07/14/2023 Last updated : 09/26/2023 # Automatically scale a cluster to meet application demands on Azure Kubernetes Service (AKS) To keep up with application demands in Azure Kubernetes Service (AKS), you may need to adjust the number of nodes that run your workloads. The cluster autoscaler component watches for pods in your cluster that can't be scheduled because of resource constraints. When the cluster autoscaler detects issues, it scales up the number of nodes in the node pool to meet the application demand. It also regularly checks nodes for a lack of running pods and scales down the number of nodes as needed. -This article shows you how to enable and manage the cluster autoscaler in an AKS cluster. +This article shows you how to enable and manage the cluster autoscaler in an AKS cluster, which is based on the open source [Kubernetes][kubernetes-cluster-autoscaler] version. ## Before you begin This article requires Azure CLI version 2.0.76 or later. Run `az --version` to f ## About the cluster autoscaler -To adjust to changing application demands, such as between workdays and evenings or weekends, clusters often need a way to automatically scale. AKS clusters can scale in one of two ways: +To adjust to changing application demands, such as between workdays and evenings or weekends, clusters often need a way to automatically scale. AKS clusters can scale in the following ways: -* The **cluster autoscaler** watches for pods that can't be scheduled on nodes because of resource constraints. The cluster then automatically increases the number of nodes. 
For more information, see [How does scale-up work?](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-scale-up-work) -* The **horizontal pod autoscaler** uses the Metrics Server in a Kubernetes cluster to monitor the resource demand of pods. If an application needs more resources, the number of pods is automatically increased to meet the demand. +* The **cluster autoscaler** periodically checks for pods that can't be scheduled on nodes because of resource constraints. The cluster then automatically increases the number of nodes. For more information, see [How does scale-up work?](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-scale-up-work). +* The **[Horizontal Pod Autoscaler][horizontal-pod-autoscaler]** uses the Metrics Server in a Kubernetes cluster to monitor the resource demand of pods. If an application needs more resources, the number of pods is automatically increased to meet the demand. +* **[Vertical Pod Autoscaler][vertical-pod-autoscaler]** (preview) automatically sets resource requests and limits on containers per workload based on past usage to ensure pods are scheduled onto nodes that have the required CPU and memory resources. :::image type="content" source="media/autoscaler/cluster-autoscaler.png" alt-text="Screenshot of how the cluster autoscaler and horizontal pod autoscaler often work together to support the required application demands."::: -Both the horizontal pod autoscaler and cluster autoscaler can decrease the number of pods and nodes as needed. The cluster autoscaler decreases the number of nodes when there has been unused capacity after a period of time. Any pods on a node removed by the cluster autoscaler are safely scheduled elsewhere in the cluster. +The Horizontal Pod Autoscaler scales the number of pod replicas as needed, and the cluster autoscaler scales the number of nodes in a node pool as needed. 
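As a quick sketch of the pod-level half of this pairing, a Horizontal Pod Autoscaler can be created imperatively with `kubectl autoscale`; the deployment name `my-app` is a placeholder, and the commands assume a running cluster with the Metrics Server installed:

```bash
# Create an HPA targeting 50% average CPU, scaling between 1 and 10 replicas.
kubectl autoscale deployment my-app --cpu-percent=50 --min=1 --max=10

# Watch the HPA adjust the replica count as load changes.
kubectl get hpa my-app --watch
```

When pods scaled out this way can no longer be scheduled, the cluster autoscaler is what adds the nodes to host them.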
The cluster autoscaler decreases the number of nodes when there has been unused capacity after a period of time. Any pods on a node removed by the cluster autoscaler are safely scheduled elsewhere in the cluster. -With autoscaling enabled, when the node pool size is lower than the minimum or greater than the maximum it applies the scaling rules. Next, the autoscaler waits to take effect until a new node is needed in the node pool or until a node may be safely deleted from the current node pool. For more information, see [How does scale-down work?](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-scale-down-work) +While Vertical Pod Autoscaler or Horizontal Pod Autoscaler can be used to automatically adjust the number of Kubernetes pods in a workload, the number of nodes also needs to be able to scale to meet the computational needs of the pods. The cluster autoscaler addresses that need, handling scale up and scale down of Kubernetes nodes. It is common practice to enable the cluster autoscaler for nodes, and either the Vertical Pod Autoscaler or the Horizontal Pod Autoscaler for pods. ++The cluster autoscaler and Horizontal Pod Autoscaler can work together and are often both deployed in a cluster. When combined, the Horizontal Pod Autoscaler runs the number of pods required to meet application demand, and the cluster autoscaler runs the number of nodes required to support the scheduled pods. ++> [!NOTE] +> Manual scaling is disabled when you use the cluster autoscaler. Let the cluster autoscaler determine the required number of nodes. If you want to manually scale your cluster, [disable the cluster autoscaler](#disable-the-cluster-autoscaler-on-a-cluster). ++With the cluster autoscaler enabled, when the node pool size is lower than the minimum or greater than the maximum, it applies the scaling rules.
Next, the autoscaler waits to take effect until a new node is needed in the node pool or until a node may be safely deleted from the current node pool. For more information, see [How does scale-down work?](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-scale-down-work) The cluster autoscaler may be unable to scale down if pods can't move, such as in the following situations: The cluster autoscaler may be unable to scale down if pods can't move, such as i For more information, see [What types of pods can prevent the cluster autoscaler from removing a node?][autoscaler-scaledown] -The cluster autoscaler uses startup parameters for things like time intervals between scale events and resource thresholds. For more information on what parameters the cluster autoscaler uses, see [using the autoscaler profile](#use-the-cluster-autoscaler-profile). --The cluster autoscaler and horizontal pod autoscaler can work together and are often both deployed in a cluster. When combined, the horizontal pod autoscaler runs the number of pods required to meet application demand, and the cluster autoscaler runs the number of nodes required to support the scheduled pods. +## Use the cluster autoscaler on your AKS cluster -> [!NOTE] -> Manual scaling is disabled when you use the cluster autoscaler. Let the cluster autoscaler determine the required number of nodes. If you want to manually scale your cluster, [disable the cluster autoscaler](#disable-the-cluster-autoscaler-on-a-cluster). +In this section, you deploy, upgrade, disable, or re-enable the cluster autoscaler on your cluster. -## Use the cluster autoscaler on your AKS cluster +The cluster autoscaler uses startup parameters for things like time intervals between scale events and resource thresholds. For more information on what parameters the cluster autoscaler uses, see [using the autoscaler profile](#use-the-cluster-autoscaler-profile). 
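As a hedged sketch of those startup parameters in practice, the autoscaler and its profile can be configured with the Azure CLI; the resource group `myResourceGroup` and cluster `myAKSCluster` are placeholder names:

```azurecli-interactive
# Enable the cluster autoscaler on an existing cluster and bound the node count.
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 5

# Tune a startup parameter, such as the interval between scale events.
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --cluster-autoscaler-profile scan-interval=30s
```

Profile values apply cluster-wide, so they affect every node pool that has the autoscaler enabled.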
### Enable the cluster autoscaler on a new cluster To further help improve cluster resource utilization and free up CPU and memory [az-aks-update]: /cli/azure/aks#az-aks-update [az-aks-scale]: /cli/azure/aks#az-aks-scale [vertical-pod-autoscaler]: vertical-pod-autoscaler.md+[horizontal-pod-autoscaler]: concepts-scale.md#horizontal-pod-autoscaler [az-group-create]: /cli/azure/group#az_group_create <!-- LINKS - external --> To further help improve cluster resource utilization and free up CPU and memory [kubernetes-hpa]: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/ [kubernetes-hpa-walkthrough]: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/ [metrics-server]: https://kubernetes.io/docs/tasks/debug-application-cluster/resource-metrics-pipeline/#metrics-server+[kubernetes-cluster-autoscaler]: https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler |
aks | Configure Kubenet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-kubenet.md | With *Azure CNI*, each pod receives an IP address in the IP subnet and can commu * An additional hop is required in the design of kubenet, which adds minor latency to pod communication. * Route tables and user-defined routes are required for using kubenet, which adds complexity to operations.+ * For more information, see [Customize cluster egress with a user-defined routing table in AKS](./egress-udr.md) and [Customize cluster egress with outbound types in AKS](./egress-outboundtype.md). * Direct pod addressing isn't supported for kubenet due to kubenet design. * Unlike Azure CNI clusters, multiple kubenet clusters can't share a subnet. * AKS doesn't apply Network Security Groups (NSGs) to its subnet and doesn't modify any of the NSGs associated with that subnet. If you provide your own subnet and add NSGs associated with that subnet, you must ensure the security rules in the NSGs allow traffic between the node and pod CIDR. For more details, see [Network security groups][aks-network-nsg]. The following considerations help outline when each network model may be the mos * Most of the pod communication is within the cluster. * You don't need advanced AKS features, such as virtual nodes or Azure Network Policy. -***Use *Azure CNI* when**: +**Use *Azure CNI* when**: * You have available IP address space. * Most of the pod communication is to resources outside of the cluster. kubenet networking requires organized route table rules to successfully route re > [!NOTE] > When you create and use your own VNet and route table with the kubenet network plugin, you need to use a [user-assigned control plane identity][bring-your-own-control-plane-managed-identity]. For a system-assigned control plane identity, you can't retrieve the identity ID before creating a cluster, which causes a delay during role assignment. 
>-> Both system-assigned and user-assigned managed identities are supported when you create and use your own VNet and route table with the azure network plugin. We highly recommend using a user-assigned managed identity for BYO scenarios. +> Both system-assigned and user-assigned managed identities are supported when you create and use your own VNet and route table with the Azure network plugin. We highly recommend using a user-assigned managed identity for BYO scenarios. ### Add a route table with a user-assigned managed identity to your AKS cluster |
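Following the guidance above about preferring a user-assigned identity for bring-your-own VNet scenarios, a minimal sketch of creating a kubenet cluster with your own subnet and identity might look like the following; every name and resource ID here is a placeholder:

```azurecli-interactive
# Create a user-assigned managed identity for the control plane.
az identity create --resource-group myResourceGroup --name myIdentity

# Create the cluster with the kubenet plugin, your own subnet, and that identity.
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --network-plugin kubenet \
  --vnet-subnet-id <subnet-resource-id> \
  --assign-identity <identity-resource-id>
```

Because the identity exists before the cluster, you can grant it the required role on your route table ahead of cluster creation, avoiding the role-assignment delay the note describes.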
aks | Image Integrity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/image-integrity.md | + + Title: Use Image Integrity to validate signed images before deploying them to your Azure Kubernetes Service (AKS) clusters (Preview) +description: Learn how to use Image Integrity to validate signed images before deploying them to your Azure Kubernetes Service (AKS) clusters. ++++ Last updated : 09/26/2023+++# Use Image Integrity to validate signed images before deploying them to your Azure Kubernetes Service (AKS) clusters (Preview) ++Azure Kubernetes Service (AKS) and its underlying container model provide increased scalability and manageability for cloud native applications. With AKS, you can launch flexible software applications according to the runtime needs of your system. However, this flexibility can introduce new challenges. ++In these application environments, using signed container images helps verify that your deployments are built from a trusted entity and that images haven't been tampered with since their creation. Image Integrity is a service that allows you to add an Azure Policy built-in definition to verify that only signed images are deployed to your AKS clusters. ++> [!NOTE] +> Image Integrity is a feature based on [Ratify][ratify]. On an AKS cluster, the feature name and property name is `ImageIntegrity`, while the relevant Image Integrity pods' names contain `Ratify`. +++## Prerequisites ++* An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free). +* [Azure CLI][azure-cli-install] or [Azure PowerShell][azure-powershell-install]. +* `aks-preview` CLI extension version 0.5.96 or later. +* Ensure that the Azure Policy add-on for AKS is enabled on your cluster. If you don't have this add-on installed, see [Install Azure Policy add-on for AKS](../governance/policy/concepts/policy-for-kubernetes.md#install-azure-policy-add-on-for-aks). 
+* An AKS cluster enabled with OIDC Issuer. To create a new cluster or update an existing cluster, see [Configure an AKS cluster with OIDC Issuer](./use-oidc-issuer.md). +* The `EnableImageIntegrityPreview` and `AKS-AzurePolicyExternalData` feature flags registered on your Azure subscription. Register the feature flags using the following commands: + + 1. Register the `EnableImageIntegrityPreview` and `AKS-AzurePolicyExternalData` feature flags using the [`az feature register`][az-feature-register] command. ++ ```azurecli-interactive + # Register the EnableImageIntegrityPreview feature flag + az feature register --namespace "Microsoft.ContainerService" --name "EnableImageIntegrityPreview" ++ # Register the AKS-AzurePolicyExternalData feature flag + az feature register --namespace "Microsoft.ContainerService" --name "AKS-AzurePolicyExternalData" + ``` ++ It may take a few minutes for the status to show as *Registered*. ++ 2. Verify the registration status using the [`az feature show`][az-feature-show] command. ++ ```azurecli-interactive + # Verify the EnableImageIntegrityPreview feature flag registration status + az feature show --namespace "Microsoft.ContainerService" --name "EnableImageIntegrityPreview" ++ # Verify the AKS-AzurePolicyExternalData feature flag registration status + az feature show --namespace "Microsoft.ContainerService" --name "AKS-AzurePolicyExternalData" + ``` ++ 3. Once the status shows *Registered*, refresh the registration of the `Microsoft.ContainerService` resource provider using the [`az provider register`][az-provider-register] command. ++ ```azurecli-interactive + az provider register --namespace Microsoft.ContainerService + ``` ++## Considerations and limitations ++* Your AKS clusters must run Kubernetes version 1.26 or above. +* You shouldn't use this feature for production Azure Container Registry (ACR) registries or workloads. +* Image Integrity supports a maximum of 200 unique signatures concurrently cluster-wide. 
+* Notation is the only supported verifier.
+* Audit is the only supported verification policy effect.
++## How Image Integrity works
+++Image Integrity uses Ratify, Azure Policy, and Gatekeeper to validate signed images before deploying them to your AKS clusters. Enabling Image Integrity on your cluster deploys a `Ratify` pod. This `Ratify` pod performs the following tasks:
++1. Reconciles certificates from Azure Key Vault per the configuration you set up through `Ratify` CRDs.
+2. Accesses images stored in ACR when validation requests come from [Azure Policy](../governance/policy/concepts/policy-for-kubernetes.md). To enable this experience, Azure Policy extends Gatekeeper, an admission controller webhook for [Open Policy Agent (OPA)](https://www.openpolicyagent.org/).
+3. Determines whether the target image is signed with a trusted certificate and is therefore considered *trusted*.
+4. `AzurePolicy` and `Gatekeeper` consume the validation results as the compliance state to decide whether to allow the deployment request.
++## Enable Image Integrity on your AKS cluster
++> [!NOTE]
+> Image signature verification is a governance-oriented scenario and leverages [Azure Policy](../governance/policy/concepts/policy-for-kubernetes.md) to verify image signatures on AKS clusters at scale. We recommend using AKS's Image Integrity built-in Azure Policy initiative, which is available in [Azure Policy's built-in definition library](../governance/policy/samples/built-in-policies.md#kubernetes).
++### [Azure CLI](#tab/azure-cli)
++* Create a policy assignment with the AKS policy initiative *`[Preview]: Use Image Integrity to ensure only trusted images are deployed`* using the [`az policy assignment create`][az-policy-assignment-create] command.
++ ```azurecli-interactive + export SCOPE="/subscriptions/${SUBSCRIPTION}/resourceGroups/${RESOURCE_GROUP}" + export LOCATION=$(az group show -n ${RESOURCE_GROUP} --query location -o tsv) ++ az policy assignment create --name 'deploy-trustedimages' --policy-set-definition 'af28bf8b-c669-4dd3-9137-1e68fdc61bd6' --display-name 'Audit deployment with unsigned container images' --scope ${SCOPE} --mi-system-assigned --role Contributor --identity-scope ${SCOPE} --location ${LOCATION} + ``` ++ The `Ratify` pod deploys after you enable the feature. ++> [!NOTE] +> The policy deploys the Image Integrity feature on your cluster when it detects any update operation on the cluster. If you want to enable the feature immediately, you need to create a policy remediation using the [`az policy remediation create`][az-policy-remediation-create] command. +> +> ```azurecli-interactive +> assignment_id=$(az policy assignment show -n 'deploy-trustedimages' --scope ${SCOPE} --query id -o tsv) +> az policy remediation create -a "$assignment_id" --definition-reference-id deployAKSImageIntegrity -n remediation -g ${RESOURCE_GROUP} +> ``` ++### [Azure portal](#tab/azure-portal) ++1. In the Azure portal, navigate to the Azure Policy service named **Policy**. +2. Select **Definitions**. +3. Under **Categories**, select **Kubernetes**. +4. Choose the policy you want to apply. In this case, select **[Preview]: Use Image Integrity to ensure only trusted images are deployed** > **Assign**. +5. Set the **Scope** to the resource group where your AKS cluster is located. +6. Select **Review + create** > **Create** to submit the policy assignment. ++++## Set up verification configurations ++For Image Integrity to properly verify the target signed image, you need to set up `Ratify` configurations through K8s [CRDs](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions) using `kubectl`. 
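Before applying any CRDs, it can help to confirm that the `Ratify` pods deployed by the policy are actually running. The following is a minimal sketch, not part of the official steps: it assumes the default `kubectl get pods --no-headers` column order (NAME, READY, STATUS, ...) and demonstrates the check against illustrative sample text; in a live cluster you would pipe in real `kubectl` output instead.

```shell
# Succeeds only if at least one pod whose name contains "ratify" reports
# a Running status. Column 3 of `kubectl get pods --no-headers` is STATUS.
check_ratify_ready() {
  grep -i 'ratify' | awk '$3 == "Running" { found=1 } END { exit !found }'
}

# Illustrative sample output (a live cluster would supply this via kubectl).
sample_output='ratify-5c8f9b7d6-abcde    1/1   Running   0   5m
gatekeeper-audit-7d9c4f   1/1   Running   0   10m'

if printf '%s\n' "$sample_output" | check_ratify_ready; then
  echo "Ratify pod is running"
else
  echo "Ratify pod not found or not ready"
fi
```

With the sample input above, this prints `Ratify pod is running`; if the `Ratify` pod were still `Pending`, the check would fail and you would wait before applying the verification configuration.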
++In this article, we use a self-signed CA cert from the official Ratify documentation to set up verification configurations. For more examples, see [Ratify CRDs](https://ratify.dev/docs/1.0/ratify-configuration).
++1. Create a `VerifyConfig` file named `verify-config.yaml` and copy in the following YAML:
++    ```YAML
+    apiVersion: config.ratify.deislabs.io/v1beta1
+    kind: CertificateStore
+    metadata:
+      name: certstore-inline
+    spec:
+      provider: inline
+      parameters:
+        value: |
+          -----BEGIN CERTIFICATE-----
+          MIIDQzCCAiugAwIBAgIUDxHQ9JxxmnrLWTA5rAtIZCzY8mMwDQYJKoZIhvcNAQEL
+          BQAwKTEPMA0GA1UECgwGUmF0aWZ5MRYwFAYDVQQDDA1SYXRpZnkgU2FtcGxlMB4X
+          DTIzMDYyOTA1MjgzMloXDTMzMDYyNjA1MjgzMlowKTEPMA0GA1UECgwGUmF0aWZ5
+          MRYwFAYDVQQDDA1SYXRpZnkgU2FtcGxlMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A
+          MIIBCgKCAQEAshmsL2VM9ojhgTVUUuEsZro9jfI27VKZJ4naWSHJihmOki7IoZS8
+          3/3ATpkE1lGbduJ77M9UxQbEW1PnESB0bWtMQtjIbser3mFCn15yz4nBXiTIu/K4
+          FYv6HVdc6/cds3jgfEFNw/8RVMBUGNUiSEWa1lV1zDM2v/8GekUr6SNvMyqtY8oo
+          ItwxfUvlhgMNlLgd96mVnnPVLmPkCmXFN9iBMhSce6sn6P9oDIB+pr1ZpE4F5bwa
+          gRBg2tWN3Tz9H/z2a51Xbn7hCT5OLBRlkorHJl2HKKRoXz1hBgR8xOL+zRySH9Qo
+          3yx6WvluYDNfVbCREzKJf9fFiQeVe0EJOwIDAQABo2MwYTAdBgNVHQ4EFgQUKzci
+          EKCDwPBn4I1YZ+sDdnxEir4wHwYDVR0jBBgwFoAUKzciEKCDwPBn4I1YZ+sDdnxE
+          ir4wDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAgQwDQYJKoZIhvcNAQEL
+          BQADggEBAGh6duwc1MvV+PUYvIkDfgj158KtYX+bv4PmcV/aemQUoArqM1ECYFjt
+          BlBVmTRJA0lijU5I0oZje80zW7P8M8pra0BM6x3cPnh/oZGrsuMizd4h5b5TnwuJ
+          hRvKFFUVeHn9kORbyQwRQ5SpL8cRGyYp+T6ncEmo0jdIOM5dgfdhwHgb+i3TejcF
+          90sUs65zovUjv1wa11SqOdu12cCj/MYp+H8j2lpaLL2t0cbFJlBY6DNJgxr5qync
+          cz8gbXrZmNbzC7W5QK5J7fcx6tlffOpt5cm427f9NiK2tira50HU7gC3HJkbiSTp
+          Xw10iXXMZzSbQ0/Hj2BF4B40WfAkgRg=
+          -----END CERTIFICATE-----
+    ---
+    apiVersion: config.ratify.deislabs.io/v1beta1
+    kind: Store
+    metadata:
+      name: store-oras
+    spec:
+      name: oras
+    ---
+    apiVersion: config.ratify.deislabs.io/v1beta1
+    kind: Verifier
+    metadata:
+      name: verifier-notary-inline
+    spec:
+      name: notation
+      artifactTypes: 
application/vnd.cncf.notary.signature
+      parameters:
+        verificationCertStores: # certificates for validating signatures
+          certs: # name of the trustStore
+            - certstore-inline # name of the certificate store CRD to include in this trustStore
+        trustPolicyDoc: # policy language that indicates which identities are trusted to produce artifacts
+          version: "1.0"
+          trustPolicies:
+            - name: default
+              registryScopes:
+                - "*"
+              signatureVerification:
+                level: strict
+                trustStores:
+                  - ca:certs
+                trustedIdentities:
+                  - "*"
+    ```
++2. Apply the `VerifyConfig` to your cluster using the `kubectl apply` command.
++    ```azurecli-interactive
+    kubectl apply -f verify-config.yaml
+    ```
++## Deploy sample images to your AKS cluster
++* Deploy a signed image using the `kubectl run` command.
++    ```azurecli-interactive
+    kubectl run demo-signed --image=ghcr.io/deislabs/ratify/notary-image:signed
+    ```
++    The following example output shows that Image Integrity allows the deployment:
++    ```output
+    ghcr.io/deislabs/ratify/notary-image:signed
+    pod/demo-signed created
+    ```
++If you want to use your own images, see the [guidance for image signing](../container-registry/container-registry-tutorial-sign-build-push.md).
++## Disable Image Integrity
++* Disable Image Integrity on your cluster using the [`az aks update`][az-aks-update] command with the `--disable-image-integrity` flag.
++    ```azurecli-interactive
+    az aks update -g myResourceGroup -n MyManagedCluster --disable-image-integrity
+    ```
++### Remove policy initiative
++* Remove the policy initiative using the [`az policy assignment delete`][az-policy-assignment-delete] command.
++    ```azurecli-interactive
+    az policy assignment delete --name 'deploy-trustedimages'
+    ```
++## Next steps
++In this article, you learned how to use Image Integrity to validate signed images before deploying them to your Azure Kubernetes Service (AKS) clusters.
If you want to learn how to sign your own containers, see [Build, sign, and verify container images using Notary and Azure Key Vault (Preview)](../container-registry/container-registry-tutorial-sign-build-push.md).
++<!-- Internal links -->
+[az-feature-register]: /cli/azure/feature#az_feature_register
+[az-feature-show]: /cli/azure/feature#az_feature_show
+[az-provider-register]: /cli/azure/provider#az_provider_register
+[az-policy-assignment-create]: /cli/azure/policy/assignment#az_policy_assignment_create
+[az-aks-update]: /cli/azure/aks#az_aks_update
+[azure-cli-install]: /cli/azure/install-azure-cli
+[azure-powershell-install]: /powershell/azure/install-az-ps
+[az-policy-assignment-delete]: /cli/azure/policy/assignment#az_policy_assignment_delete
+[az-policy-remediation-create]: /cli/azure/policy/remediation#az_policy_remediation_create
++<!-- External links -->
+[ratify]: https://github.com/deislabs/ratify
+[image-integrity-policy]: https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcf426bb8-b320-4321-8545-1b784a5df3a4 |
aks | Internal Lb | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/internal-lb.md | Last updated 02/22/2023 # Use an internal load balancer with Azure Kubernetes Service (AKS) -You can create and use an internal load balancer to restrict access to your applications in Azure Kubernetes Service (AKS). -An internal load balancer does not have a public IP and makes a Kubernetes service accessible only to applications that can reach the private IP. These applications can be within the same VNET or in another VNET through VNET peering. This article shows you how to create and use an internal load balancer with AKS. +You can create and use an internal load balancer to restrict access to your applications in Azure Kubernetes Service (AKS). An internal load balancer doesn't have a public IP and makes a Kubernetes service accessible only to applications that can reach the private IP. These applications can be within the same VNET or in another VNET through VNET peering. This article shows you how to create and use an internal load balancer with AKS. > [!NOTE] > Azure Load Balancer is available in two SKUs: *Basic* and *Standard*. The *Standard* SKU is used by default when you create an AKS cluster. When you create a *LoadBalancer* service type, you'll get the same load balancer type as when you provisioned the cluster. For more information, see [Azure Load Balancer SKU comparison][azure-lb-comparison]. ## Before you begin -This article assumes that you have an existing AKS cluster. If you need an AKS cluster, you can create one [using Azure CLI][aks-quickstart-cli], [Azure PowerShell][aks-quickstart-powershell], or [the Azure portal][aks-quickstart-portal]. --You also need the Azure CLI version 2.0.59 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli]. 
--If you want to use an existing subnet or resource group, the AKS cluster identity needs permission to manage network resources. For information, see [Use kubenet networking with your own IP address ranges in AKS][use-kubenet] or [Configure Azure CNI networking in AKS][advanced-networking]. If you're configuring your load balancer to use an [IP address in a different subnet][different-subnet], ensure the AKS cluster identity also has read access to that subnet. --For more information on permissions, see [Delegate AKS access to other Azure resources][aks-sp]. +* This article assumes that you have an existing AKS cluster. If you need an AKS cluster, you can create one using [Azure CLI][aks-quickstart-cli], [Azure PowerShell][aks-quickstart-powershell], or the [Azure portal][aks-quickstart-portal]. +* You need the Azure CLI version 2.0.59 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli]. +* If you want to use an existing subnet or resource group, the AKS cluster identity needs permission to manage network resources. For information, see [Use kubenet networking with your own IP address ranges in AKS][use-kubenet] or [Configure Azure CNI networking in AKS][advanced-networking]. If you're configuring your load balancer to use an [IP address in a different subnet][different-subnet], ensure the AKS cluster identity also has read access to that subnet. + * For more information on permissions, see [Delegate AKS access to other Azure resources][aks-sp]. 
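The sections that follow hinge on a single service annotation, `service.beta.kubernetes.io/azure-load-balancer-internal`. As a quick offline sanity check (not part of the official steps), you can confirm that a manifest actually requests an internal load balancer before applying it; this sketch writes the article's example manifest to a temporary file and greps for the annotation:

```shell
# Write the example service manifest to a file, then verify it carries the
# annotation that makes the Azure load balancer internal.
cat > /tmp/internal-lb.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: internal-app
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: internal-app
EOF

if grep -q 'azure-load-balancer-internal: "true"' /tmp/internal-lb.yaml; then
  echo "manifest requests an internal load balancer"
fi
```

Without that annotation, the same `type: LoadBalancer` service would provision a public Azure load balancer instead.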
## Create an internal load balancer -To create an internal load balancer, create a service manifest named `internal-lb.yaml` with the service type *LoadBalancer* and the *azure-load-balancer-internal* annotation as shown in the following example: --```yaml -apiVersion: v1 -kind: Service -metadata: - name: internal-app - annotations: - service.beta.kubernetes.io/azure-load-balancer-internal: "true" -spec: - type: LoadBalancer - ports: - - port: 80 - selector: - app: internal-app -``` +1. Create a service manifest named `internal-lb.yaml` with the service type `LoadBalancer` and the `azure-load-balancer-internal` annotation. -Deploy the internal load balancer using [`kubectl apply`][kubectl-apply] and specify the name of your YAML manifest. + ```yaml + apiVersion: v1 + kind: Service + metadata: + name: internal-app + annotations: + service.beta.kubernetes.io/azure-load-balancer-internal: "true" + spec: + type: LoadBalancer + ports: + - port: 80 + selector: + app: internal-app + ``` -```console -kubectl apply -f internal-lb.yaml -``` +2. Deploy the internal load balancer using the [`kubectl apply`][kubectl-apply] command. This command creates an Azure load balancer in the node resource group connected to the same virtual network as your AKS cluster. -This command creates an Azure load balancer in the node resource group that's connected to the same virtual network as your AKS cluster. + ```azurecli-interactive + kubectl apply -f internal-lb.yaml + ``` -When you view the service details, the IP address of the internal load balancer is shown in the *EXTERNAL-IP* column. In this context, *External* refers to the external interface of the load balancer. It doesn't mean that it receives a public, external IP address. This IP address is dynamically assigned from the same subnet as the AKS cluster. +3. View the service details using the `kubectl get service` command. 
-It may take a minute or two for the IP address to change from *\<pending\>* to an actual internal IP address, as shown in the following example: + ```azurecli-interactive + kubectl get service internal-app + ``` -``` -kubectl get service internal-app + The IP address of the internal load balancer is shown in the `EXTERNAL-IP` column, as shown in the following example output. In this context, *External* refers to the external interface of the load balancer. It doesn't mean that it receives a public, external IP address. This IP address is dynamically assigned from the same subnet as the AKS cluster. -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -internal-app LoadBalancer 10.0.248.59 10.240.0.7 80:30555/TCP 2m -``` + ```output + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + internal-app LoadBalancer 10.0.248.59 10.240.0.7 80:30555/TCP 2m + ``` ## Specify an IP address You can use the [`az network vnet subnet list`][az-network-vnet-subnet-list] Azu For more information on subnets, see [Add a node pool with a unique subnet][unique-subnet]. -If you want to use a specific IP address with the load balancer, there are two ways: +If you want to use a specific IP address with the load balancer, you have two options: **set service annotations** or **add the *LoadBalancerIP* property to the load balancer YAML manifest**. > [!IMPORTANT] > Adding the *LoadBalancerIP* property to the load balancer YAML manifest is deprecating following [upstream Kubernetes](https://github.com/kubernetes/kubernetes/pull/107235). While current usage remains the same and existing services are expected to work without modification, we **highly recommend setting service annotations** instead. -* **Set service annotations**: Use `service.beta.kubernetes.io/azure-load-balancer-ipv4` for an IPv4 address and `service.beta.kubernetes.io/azure-load-balancer-ipv6` for an IPv6 address. +### [Set service annotations](#tab/set-service-annotations) ++1. 
Set service annotations using `service.beta.kubernetes.io/azure-load-balancer-ipv4` for an IPv4 address and `service.beta.kubernetes.io/azure-load-balancer-ipv6` for an IPv6 address. ```yaml apiVersion: v1 If you want to use a specific IP address with the load balancer, there are two w app: internal-app ``` -* **Add the *LoadBalancerIP* property to the load balancer YAML manifest**: Add the *Service.Spec.LoadBalancerIP* property to the load balancer YAML manifest. This field is deprecating following [upstream Kubernetes](https://github.com/kubernetes/kubernetes/pull/107235), and it can't support dual-stack. Current usage remains the same and existing services are expected to work without modification. +### [Add the *LoadBalancerIP* property to the load balancer YAML manifest](#tab/add-load-balancer-ip-property) ++1. Add the *Service.Spec.LoadBalancerIP* property to the load balancer YAML manifest. This field is deprecating following [upstream Kubernetes](https://github.com/kubernetes/kubernetes/pull/107235), and it can't support dual-stack. Current usage remains the same and existing services are expected to work without modification. ```yaml apiVersion: v1 If you want to use a specific IP address with the load balancer, there are two w app: internal-app ``` -When you view the service details, the IP address in the *EXTERNAL-IP* column should reflect your specified IP address. + -``` -kubectl get service internal-app +2. View the service details using the `kubectl get service` command. 
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -internal-app LoadBalancer 10.0.184.168 10.240.0.25 80:30225/TCP 4m -``` + ```azurecli-interactive + kubectl get service internal-app + ``` ++ The IP address in the `EXTERNAL-IP` column should reflect your specified IP address, as shown in the following example output: ++ ```output + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + internal-app LoadBalancer 10.0.184.168 10.240.0.25 80:30225/TCP 4m + ``` For more information on configuring your load balancer in a different subnet, see [Specify a different subnet][different-subnet] For more information on configuring your load balancer in a different subnet, se ### Before you begin -You must have the following resources: --* Kubernetes version 1.22.x or later. -* An existing resource group with a VNet and subnet. This resource group is where you'll [create the private endpoint](#create-a-private-endpoint-to-the-private-link-service). If you don't have these resources, see [Create a virtual network and subnet][aks-vnet-subnet]. +* You need Kubernetes version 1.22.x or later. +* You need an existing resource group with a VNet and subnet. This resource group is where you [create the private endpoint](#create-a-private-endpoint-to-the-private-link-service). If you don't have these resources, see [Create a virtual network and subnet][aks-vnet-subnet]. ### Create a Private Link service connection -To attach an Azure Private Link service to an internal load balancer, create a service manifest named `internal-lb-pls.yaml` with the service type *LoadBalancer* and the *azure-load-balancer-internal* and *azure-pls-create* annotation as shown in the following example. For more options, refer to the [Azure Private Link Service Integration](https://kubernetes-sigs.github.io/cloud-provider-azure/topics/pls-integration/) design document. 
--```yaml -apiVersion: v1 -kind: Service -metadata: - name: internal-app - annotations: - service.beta.kubernetes.io/azure-load-balancer-internal: "true" - service.beta.kubernetes.io/azure-pls-create: "true" -spec: - type: LoadBalancer - ports: - - port: 80 - selector: - app: internal-app -``` --Deploy the internal load balancer using [`kubectl apply`][kubectl-apply] and specify the name of your YAML manifest. --```console -kubectl apply -f internal-lb-pls.yaml -``` --This command creates an Azure load balancer in the node resource group that's connected to the same virtual network as your AKS cluster. +1. Create a service manifest named `internal-lb-pls.yaml` with the service type `LoadBalancer` and the `azure-load-balancer-internal` and `azure-pls-create` annotations. For more options, refer to the [Azure Private Link Service Integration](https://kubernetes-sigs.github.io/cloud-provider-azure/topics/pls-integration/) design document. -When you view the service details, the IP address of the internal load balancer is shown in the *EXTERNAL-IP* column. In this context, *External* refers to the external interface of the load balancer. It doesn't mean that it receives a public, external IP address. + ```yaml + apiVersion: v1 + kind: Service + metadata: + name: internal-app + annotations: + service.beta.kubernetes.io/azure-load-balancer-internal: "true" + service.beta.kubernetes.io/azure-pls-create: "true" + spec: + type: LoadBalancer + ports: + - port: 80 + selector: + app: internal-app + ``` -It may take a minute or two for the IP address to change from *\<pending\>* to an actual internal IP address, as shown in the following example: +2. Deploy the internal load balancer using the [`kubectl apply`][kubectl-apply] command. This command creates an Azure load balancer in the node resource group connected to the same virtual network as your AKS cluster. 
It also creates a Private Link Service object that connects to the frontend IP configuration of the load balancer associated with the Kubernetes service. -``` -kubectl get service internal-app + ```azurecli-interactive + kubectl apply -f internal-lb-pls.yaml + ``` -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -internal-app LoadBalancer 10.125.17.53 10.125.0.66 80:30430/TCP 64m -``` +3. View the service details using the `kubectl get service` command. -A Private Link Service object is also created. This Private Link Service object connects to the frontend IP configuration of the load balancer associated with the Kubernetes service. You can get the details of the Private Link Service object with the following sample command: + ```azurecli-interactive + kubectl get service internal-app + ``` -```azurecli-interactive -# Create a variable for the resource group + The IP address of the internal load balancer is shown in the `EXTERNAL-IP` column, as shown in the following example output. In this context, *External* refers to the external interface of the load balancer. It doesn't mean that it receives a public, external IP address. -AKS_MC_RG=$(az aks show -g myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv) + ```output + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + internal-app LoadBalancer 10.125.17.53 10.125.0.66 80:30430/TCP 64m + ``` -# List the private link service +4. View the details of the Private Link Service object using the [`az network private-link-service list`][az-network-private-link-service-list] command. 
-az network private-link-service list -g $AKS_MC_RG --query "[].{Name:name,Alias:alias}" -o table + ```azurecli-interactive + # Create a variable for the node resource group + + AKS_MC_RG=$(az aks show -g myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv) + + # View the details of the Private Link Service object + + az network private-link-service list -g $AKS_MC_RG --query "[].{Name:name,Alias:alias}" -o table + ``` -Name Alias --pls-xyz pls-xyz.abc123-defg-4hij-56kl-789mnop.eastus2.azure.privatelinkservice + Your output should look similar to the following example output: -``` + ```output + Name Alias + -- - + pls-xyz pls-xyz.abc123-defg-4hij-56kl-789mnop.eastus2.azure.privatelinkservice + ``` ### Create a Private Endpoint to the Private Link service -A Private Endpoint allows you to privately connect to your Kubernetes service object via the Private Link Service you created. To do so, follow the sample commands. --```azurecli-interactive -# Create a variable for the private link service --AKS_PLS_ID=$(az network private-link-service list -g $AKS_MC_RG --query "[].id" -o tsv) --# Create the private endpoint --$ az network private-endpoint create \ - -g myOtherResourceGroup \ - --name myAKSServicePE \ - --vnet-name myOtherVNET \ - --subnet pe-subnet \ - --private-connection-resource-id $AKS_PLS_ID \ - --connection-name connectToMyK8sService -``` +A Private Endpoint allows you to privately connect to your Kubernetes service object via the Private Link Service you created. ++* Create the private endpoint using the [`az network private-endpoint create`][az-network-private-endpoint-create] command. 
++
+    ```azurecli-interactive
+    # Create a variable for the private link service
+
+    AKS_PLS_ID=$(az network private-link-service list -g $AKS_MC_RG --query "[].id" -o tsv)
+
+    # Create the private endpoint
+
+    az network private-endpoint create \
+        -g myOtherResourceGroup \
+        --name myAKSServicePE \
+        --vnet-name myOtherVNET \
+        --subnet pe-subnet \
+        --private-connection-resource-id $AKS_PLS_ID \
+        --connection-name connectToMyK8sService
+    ```

## Use private networks

For more information, see [configure your own virtual network subnets with Kubenet][use-kubenet].

You don't need to make any changes to the previous steps to deploy an internal load balancer that uses a private network in an AKS cluster. The load balancer is created in the same resource group as your AKS cluster, but it's instead connected to your private virtual network and subnet, as shown in the following example:

-```
+```azurecli-interactive
$ kubectl get service internal-app

NAME           TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
internal-app   LoadBalancer   10.1.15.188   10.0.0.35     80:31669/TCP   1m
```

> [!NOTE]->
-> You may need to assign a minimum of *Microsoft.Network/virtualNetworks/subnets/read* and *Microsoft.Network/virtualNetworks/subnets/join/action* permission to AKS MSI on the Azure Virtual Network resources. You can view the cluster identity with [az aks show][az-aks-show], such as `az aks show --resource-group myResourceGroup --name myAKSCluster --query "identity"`. To create a role assignment, use the [az role assignment create][az-role-assignment-create] command.
+> You may need to assign a minimum of *Microsoft.Network/virtualNetworks/subnets/read* and *Microsoft.Network/virtualNetworks/subnets/join/action* permission to AKS MSI on the Azure Virtual Network resources. You can view the cluster identity with [az aks show][az-aks-show], such as `az aks show --resource-group myResourceGroup --name myAKSCluster --query "identity"`. 
To create a role assignment, use the [`az role assignment create`][az-role-assignment-create] command. ### Specify a different subnet -Add the *azure-load-balancer-internal-subnet* annotation to your service to specify a subnet for your load balancer. The subnet specified must be in the same virtual network as your AKS cluster. When deployed, the load balancer *EXTERNAL-IP* address is part of the specified subnet. --```yaml -apiVersion: v1 -kind: Service -metadata: - name: internal-app - annotations: - service.beta.kubernetes.io/azure-load-balancer-internal: "true" - service.beta.kubernetes.io/azure-load-balancer-internal-subnet: "apps-subnet" -spec: - type: LoadBalancer - ports: - - port: 80 - selector: - app: internal-app -``` +* Add the `azure-load-balancer-internal-subnet` annotation to your service to specify a subnet for your load balancer. The subnet specified must be in the same virtual network as your AKS cluster. When deployed, the load balancer `EXTERNAL-IP` address is part of the specified subnet. ++ ```yaml + apiVersion: v1 + kind: Service + metadata: + name: internal-app + annotations: + service.beta.kubernetes.io/azure-load-balancer-internal: "true" + service.beta.kubernetes.io/azure-load-balancer-internal-subnet: "apps-subnet" + spec: + type: LoadBalancer + ports: + - port: 80 + selector: + app: internal-app + ``` ## Delete the load balancer -The load balancer will be deleted when all of its services are deleted. +The load balancer is deleted when all of its services are deleted. As with any Kubernetes resource, you can directly delete a service, such as `kubectl delete service internal-app`, which also deletes the underlying Azure load balancer. 
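Since the Azure load balancer is removed only after the last LoadBalancer-type service is gone, it can be handy to see how many such services remain. The following is a minimal sketch, not part of the official steps: it assumes the default `kubectl get services -A --no-headers` column order (NAMESPACE, NAME, TYPE, ...) and runs against illustrative sample text; in a live cluster you would pipe in real `kubectl` output instead.

```shell
# Count remaining LoadBalancer-type services; the underlying Azure load
# balancer is deleted once this count reaches zero. Column 3 of
# `kubectl get services -A --no-headers` is TYPE.
count_lb_services() {
  awk '$3 == "LoadBalancer" { n++ } END { print n+0 }'
}

# Illustrative sample output (a live cluster would supply this via kubectl).
sample='default   internal-app   LoadBalancer   10.1.15.188   10.0.0.35   80:31669/TCP   1m
default   kubernetes     ClusterIP      10.0.0.1      <none>      443/TCP        5d'

printf '%s\n' "$sample" | count_lb_services
# prints: 1
```

After `kubectl delete service internal-app`, the same check against fresh output would print `0`, indicating the Azure load balancer itself is being removed.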
To learn more about Kubernetes services, see the [Kubernetes services documentat [unique-subnet]: create-node-pools.md#add-a-node-pool-with-a-unique-subnet [az-network-vnet-subnet-list]: /cli/azure/network/vnet/subnet#az-network-vnet-subnet-list [get-azvirtualnetworksubnetconfig]: /powershell/module/az.network/get-azvirtualnetworksubnetconfig+[az-network-private-link-service-list]: /cli/azure/network/private-link-service#az_network_private_link_service_list +[az-network-private-endpoint-create]: /cli/azure/network/private-endpoint#az_network_private_endpoint_create |
aks | Keda Deploy Add On Arm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-deploy-add-on-arm.md | Title: Install the Kubernetes Event-driven Autoscaling (KEDA) add-on by using an ARM template + Title: Install the Kubernetes Event-driven Autoscaling (KEDA) add-on using an ARM template description: Use an ARM template to deploy the Kubernetes Event-driven Autoscaling (KEDA) add-on to Azure Kubernetes Service (AKS). Previously updated : 10/10/2022 Last updated : 09/26/2023 -# Install the Kubernetes Event-driven Autoscaling (KEDA) add-on by using ARM template +# Install the Kubernetes Event-driven Autoscaling (KEDA) add-on using an ARM template -This article shows you how to deploy the Kubernetes Event-driven Autoscaling (KEDA) add-on to Azure Kubernetes Service (AKS) by using an [ARM](../azure-resource-manager/templates/index.yml) template. +This article shows you how to deploy the Kubernetes Event-driven Autoscaling (KEDA) add-on to Azure Kubernetes Service (AKS) using an [ARM template](../azure-resource-manager/templates/index.yml). [!INCLUDE [Current version callout](./includes/ked)] -## Prerequisites +## Before you begin -- An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free).-- [Azure CLI installed](/cli/azure/install-azure-cli).-- Firewall rules are configured to allow access to the Kubernetes API server. ([learn more][aks-firewall-requirements])+- You need an Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free). +- You need the [Azure CLI installed](/cli/azure/install-azure-cli). +- This article assumes you have an existing Azure resource group. If you don't have an existing resource group, you can create one using the [`az group create`][az-group-create] command. +- Ensure you have firewall rules configured to allow access to the Kubernetes API server. 
For more information, see [Outbound network and FQDN rules for Azure Kubernetes Service (AKS) clusters][aks-firewall-requirements]. +- [Install the `aks-preview` Azure CLI extension](#install-the-aks-preview-azure-cli-extension). +- [Register the `AKS-KedaPreview` feature flag](#register-the-aks-kedapreview-feature-flag). +- [Create an SSH key pair](#create-an-ssh-key-pair). -## Install the aks-preview Azure CLI extension +### Install the `aks-preview` Azure CLI extension [!INCLUDE [preview features callout](includes/preview/preview-callout.md)] -To install the aks-preview extension, run the following command: +1. Install the `aks-preview` extension using the [`az extension add`][az-extension-add] command. -```azurecli -az extension add --name aks-preview -``` + ```azurecli-interactive + az extension add --name aks-preview + ``` -Run the following command to update to the latest version of the extension released: +2. Update to the latest version of the `aks-preview` extension using the [`az extension update`][az-extension-update] command. -```azurecli -az extension update --name aks-preview -``` + ```azurecli-interactive + az extension update --name aks-preview + ``` -## Register the 'AKS-KedaPreview' feature flag +### Register the `AKS-KedaPreview` feature flag -Register the `AKS-KedaPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example: +1. Register the `AKS-KedaPreview` feature flag using the [`az feature register`][az-feature-register] command. -```azurecli-interactive -az feature register --namespace "Microsoft.ContainerService" --name "AKS-KedaPreview" -``` + ```azurecli-interactive + az feature register --namespace "Microsoft.ContainerService" --name "AKS-KedaPreview" + ``` -It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature show][az-feature-show] command: + It takes a few minutes for the status to show *Registered*. 
-```azurecli-interactive -az feature show --namespace "Microsoft.ContainerService" --name "AKS-KedaPreview" -``` +2. Verify the registration status using the [`az feature show`][az-feature-show] command. -When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command: + ```azurecli-interactive + az feature show --namespace "Microsoft.ContainerService" --name "AKS-KedaPreview" + ``` -```azurecli-interactive -az provider register --namespace Microsoft.ContainerService -``` +3. When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider using the [`az provider register`][az-provider-register] command. -## Install the KEDA add-on with Azure Resource Manager (ARM) templates + ```azurecli-interactive + az provider register --namespace Microsoft.ContainerService + ``` -The KEDA add-on can be enabled by deploying an AKS cluster with an Azure Resource Manager template and specifying the `workloadAutoScalerProfile` field: +### Create an SSH key pair -```json - "workloadAutoScalerProfile": { - "keda": { - "enabled": true - } - } -``` +1. Navigate to the [Azure Cloud Shell](https://shell.azure.com/). +2. Create an SSH key pair using the [`az sshkey create`][az-sshkey-create] command. -## Connect to your AKS cluster + ```azurecli-interactive + az sshkey create --name <sshkey-name> --resource-group <resource-group-name> + ``` ++## Enable the KEDA add-on with an ARM template ++1. Deploy the [ARM template for an AKS cluster](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.kubernetes%2Faks%2Fazuredeploy.json). +2. Select **Edit template**. +3. 
Enable the KEDA add-on by specifying the `workloadAutoScalerProfile` field in the ARM template, as shown in the following example: -To connect to the Kubernetes cluster from your local computer, you use [kubectl][kubectl], the Kubernetes command-line client. --If you use the Azure Cloud Shell, `kubectl` is already installed. You can also install it locally using the [az aks install-cli][] command: --```azurecli -az aks install-cli -``` --To configure `kubectl` to connect to your Kubernetes cluster, use the [az aks get-credentials][] command. The following example gets credentials for the AKS cluster named *MyAKSCluster* in the *MyResourceGroup*: --```azurecli -az aks get-credentials --resource-group MyResourceGroup --name MyAKSCluster -``` --## Example deployment --The following snippet is a sample deployment that creates a cluster with KEDA enabled with a single node pool comprised of three `DS2_v5` nodes. --```json -{ - "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", - "contentVersion": "1.0.0.0", - "resources": [ - { - "apiVersion": "2022-05-02-preview", - "dependsOn": [], - "type": "Microsoft.ContainerService/managedClusters", - "location": "westcentralus", - "name": "myAKSCluster", - "properties": { - "kubernetesVersion": "1.23.5", - "enableRBAC": true, - "dnsPrefix": "myAKSCluster", - "agentPoolProfiles": [ - { - "name": "agentpool", - "osDiskSizeGB": 200, - "count": 3, - "enableAutoScaling": false, - "vmSize": "Standard_D2S_v5", - "osType": "Linux", - "storageProfile": "ManagedDisks", - "type": "VirtualMachineScaleSets", - "mode": "System", - "maxPods": 110, - "availabilityZones": [], - "nodeTaints": [], - "enableNodePublicIP": false - } - ], - "networkProfile": { - "loadBalancerSku": "standard", - "networkPlugin": "kubenet" - }, - "workloadAutoScalerProfile": { - "keda": { - "enabled": true - } - } - }, - "identity": { - "type": "SystemAssigned" + ```json + "workloadAutoScalerProfile": { + "keda": { + "enabled": 
true } }- ] -} -``` + ``` -## Start scaling apps with KEDA +4. Select **Save**. +5. Update the required values for the ARM template: ++ - **Subscription**: Select the Azure subscription to use for the deployment. + - **Resource group**: Select the resource group to use for the deployment. + - **Region**: Select the region to use for the deployment. + - **Dns Prefix**: Enter a unique DNS name to use for the cluster. + - **Linux Admin Username**: Enter a username for the cluster. + - **SSH public key source**: Select **Use existing key stored in Azure**. + - **Store Keys**: Select the key pair you created earlier in the article. ++6. Select **Review + create** > **Create**. -Now that KEDA is installed, you can start autoscaling your apps with KEDA by using its custom resource definition has been defined (CRD). +## Connect to your AKS cluster ++To connect to the Kubernetes cluster from your local device, you use [kubectl][kubectl], the Kubernetes command-line client. ++If you use the Azure Cloud Shell, `kubectl` is already installed. You can also install it locally using the [`az aks install-cli`][az-aks-install-cli] command. ++- Configure `kubectl` to connect to your Kubernetes cluster using the [`az aks get-credentials`][az-aks-get-credentials] command. ++ ```azurecli-interactive + az aks get-credentials --resource-group <resource-group-name> --name <cluster-name> + ``` ++## Start scaling apps with KEDA -To learn more about KEDA CRDs, follow the official [KEDA documentation][keda-scalers] to define your scaler. +You can autoscale your apps with KEDA using custom resource definitions (CRDs). For more information, see the [KEDA documentation][keda-scalers]. -## Clean Up +## Remove resources -To remove the resource group, and all related resources, use the [Az PowerShell module group delete][az-group-delete] command: +- Remove the resource group and all related resources using the [`az group delete`][az-group-delete] command. 
-```azurecli -az group delete --name MyResourceGroup -``` + ```azurecli-interactive + az group delete --name <resource-group-name> + ``` ## Next steps This article showed you how to install the KEDA add-on on an AKS cluster, and then verify that it's installed and running. With the KEDA add-on installed on your cluster, you can [deploy a sample application][keda-sample] to start scaling apps. -You can troubleshoot KEDA add-on problems in [this article][keda-troubleshoot]. +For information on KEDA troubleshooting, see [Troubleshoot the Kubernetes Event-driven Autoscaling (KEDA) add-on][keda-troubleshoot]. <!-- LINKS - internal -->-[az-aks-create]: /cli/azure/aks#az-aks-create -[az aks install-cli]: /cli/azure/aks#az-aks-install-cli -[az aks get-credentials]: /cli/azure/aks#az-aks-get-credentials -[az aks update]: /cli/azure/aks#az-aks-update [az-group-delete]: /cli/azure/group#az-group-delete [keda-troubleshoot]: /troubleshoot/azure/azure-kubernetes/troubleshoot-kubernetes-event-driven-autoscaling-add-on?context=/azure/aks/context/aks-context [aks-firewall-requirements]: outbound-rules-control-egress.md#azure-global-required-network-rules [az-provider-register]: /cli/azure/provider#az-provider-register [az-feature-register]: /cli/azure/feature#az-feature-register [az-feature-show]: /cli/azure/feature#az-feature-show+[az-sshkey-create]: /cli/azure/ssh#az-sshkey-create +[az-aks-get-credentials]: /cli/azure/aks#az-aks-get-credentials +[az-aks-install-cli]: /cli/azure/aks#az-aks-install-cli +[az-extension-add]: /cli/azure/extension#az-extension-add +[az-extension-update]: /cli/azure/extension#az-extension-update +[az-group-create]: /cli/azure/group#az-group-create <!-- LINKS - external --> [kubectl]: https://kubernetes.io/docs/reference/kubectl/-[keda]: https://keda.sh/ [keda-scalers]: https://keda.sh/docs/scalers/ [keda-sample]: https://github.com/kedacore/sample-dotnet-worker-servicebus-queue |
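The "Start scaling apps with KEDA" section above defers to the KEDA documentation for CRDs. For orientation only, a minimal `ScaledObject` targeting a deployment looks roughly like the following sketch (all names are placeholders, and the `cpu` trigger fields follow the KEDA scalers documentation; check that documentation before relying on this shape):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-scaledobject          # placeholder name
  namespace: default
spec:
  scaleTargetRef:
    name: my-deployment          # placeholder: the Deployment to scale
  minReplicaCount: 1
  maxReplicaCount: 5
  triggers:
    - type: cpu
      metricType: Utilization
      metadata:
        value: "60"              # target average CPU utilization (%)
```

Applying a manifest like this with `kubectl apply -f` has KEDA manage a Horizontal Pod Autoscaler for the target deployment.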
aks | Open Service Mesh Deploy Addon Bicep | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-deploy-addon-bicep.md | Title: Deploy the Open Service Mesh add-on by using Bicep + Title: Deploy the Open Service Mesh add-on using Bicep in Azure Kubernetes Service (AKS) description: Use a Bicep template to deploy the Open Service Mesh (OSM) add-on to Azure Kubernetes Service (AKS). Previously updated : 9/20/2021 Last updated : 09/25/2023 +ms.editor: schaffererin -# Deploy the Open Service Mesh add-on by using Bicep +# Deploy the Open Service Mesh add-on using Bicep in Azure Kubernetes Service (AKS) -This article shows you how to deploy the Open Service Mesh (OSM) add-on to Azure Kubernetes Service (AKS) by using a [Bicep](../azure-resource-manager/bicep/index.yml) template. +This article shows you how to deploy the Open Service Mesh (OSM) add-on to Azure Kubernetes Service (AKS) using a [Bicep](../azure-resource-manager/bicep/index.yml) template. > [!IMPORTANT] > Based on the version of Kubernetes your cluster is running, the OSM add-on installs a different version of OSM. This article shows you how to deploy the Open Service Mesh (OSM) add-on to Azure [Bicep](../azure-resource-manager/bicep/overview.md) is a domain-specific language that uses declarative syntax to deploy Azure resources. You can use Bicep in place of creating [Azure Resource Manager templates](../azure-resource-manager/templates/overview.md) to deploy your infrastructure-as-code Azure resources. -## Prerequisites +## Before you begin -- Azure CLI version 2.20.0 or later-- An SSH public key used for deploying AKS-- [Visual Studio Code](https://code.visualstudio.com/) with a Bash terminal-- The Visual Studio Code [Bicep extension](../azure-resource-manager/bicep/install.md)+Before you begin, make sure you have the following prerequisites in place: ++* The Azure CLI version 2.20.0 or later. Run `az --version` to find the version. 
If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). +* An SSH public key used for deploying AKS. For more information, see [Create SSH keys using the Azure CLI](../virtual-machines/ssh-keys-azure-cli.md). +* [Visual Studio Code](https://code.visualstudio.com/) with a Bash terminal. +* The Visual Studio Code [Bicep extension](../azure-resource-manager/bicep/install.md). ## Install the OSM add-on for a new AKS cluster by using Bicep -For deployment of a new AKS cluster, you enable the OSM add-on at cluster creation. The following instructions use a generic Bicep template that deploys an AKS cluster by using ephemeral disks and the [`kubenet`](./configure-kubenet.md) container network interface, and then enables the OSM add-on. For more advanced deployment scenarios, see [What is Bicep?](../azure-resource-manager/bicep/overview.md). +For deployment of a new AKS cluster, you enable the OSM add-on at cluster creation. The following instructions use a generic Bicep template that deploys an AKS cluster by using ephemeral disks and the [`kubenet`](./configure-kubenet.md) container network interface, and then enables the OSM add-on. For more advanced deployment scenarios, see [What is Bicep?](../azure-resource-manager/bicep/overview.md) ### Create a resource group -In Azure, you can associate related resources by using a resource group. Create a resource group by using [az group create](/cli/azure/group#az-group-create). The following example creates a resource group named *my-osm-bicep-aks-cluster-rg* in a specified Azure location (region): +* Create a resource group using the [`az group create`](/cli/azure/group#az-group-create) command. 
-```azurecli-interactive -az group create --name <my-osm-bicep-aks-cluster-rg> --location <azure-region> -``` + ```azurecli-interactive + az group create --name <my-osm-bicep-aks-cluster-rg> --location <azure-region> + ``` ### Create the main and parameters Bicep files -By using Visual Studio Code with a Bash terminal open, create a directory to store the necessary Bicep deployment files. The following example creates a directory named *bicep-osm-aks-addon* and changes to the directory: --```azurecli-interactive -mkdir bicep-osm-aks-addon -cd bicep-osm-aks-addon -``` --Next, create both the main file and the parameters file, as shown in the following example: --```azurecli-interactive -touch osm.aks.bicep && touch osm.aks.parameters.json -``` --Open the *osm.aks.bicep* file and copy the following example content to it. Then save the file. --```bicep -// https://learn.microsoft.com/azure/aks/troubleshooting#what-naming-restrictions-are-enforced-for-aks-resources-and-parameters -@minLength(3) -@maxLength(63) -@description('Provide a name for the AKS cluster. The only allowed characters are letters, numbers, dashes, and underscore. The first and last character must be a letter or a number.') -param clusterName string -@minLength(3) -@maxLength(54) -@description('Provide a name for the AKS dnsPrefix. Valid characters include alphanumeric values and hyphens (-). The dnsPrefix can\'t include special characters such as a period (.)') -param clusterDNSPrefix string -param k8Version string -param sshPubKey string ---resource aksCluster 'Microsoft.ContainerService/managedClusters@2021-03-01' = { - name: clusterName - location: resourceGroup().location - identity: { - type: 'SystemAssigned' - } - properties: { - kubernetesVersion: k8Version - dnsPrefix: clusterDNSPrefix - enableRBAC: true - agentPoolProfiles: [ - { - name: 'agentpool' - count: 3 - vmSize: 'Standard_DS2_v2' - osDiskSizeGB: 30 - osDiskType: 'Ephemeral' - osType: 'Linux' - mode: 'System' +1. 
Create a directory to store the necessary Bicep deployment files. The following example creates a directory named *bicep-osm-aks-addon* and changes to the directory: ++ ```azurecli-interactive + mkdir bicep-osm-aks-addon + cd bicep-osm-aks-addon + ``` ++2. Create the main file and the parameters file. ++ ```azurecli-interactive + touch osm.aks.bicep && touch osm.aks.parameters.json + ``` ++3. Open the *osm.aks.bicep* file and copy in the following content: ++ ```bicep + // https://learn.microsoft.com/azure/aks/troubleshooting#what-naming-restrictions-are-enforced-for-aks-resources-and-parameters + @minLength(3) + @maxLength(63) + @description('Provide a name for the AKS cluster. The only allowed characters are letters, numbers, dashes, and underscore. The first and last character must be a letter or a number.') + param clusterName string + @minLength(3) + @maxLength(54) + @description('Provide a name for the AKS dnsPrefix. Valid characters include alphanumeric values and hyphens (-). The dnsPrefix can\'t include special characters such as a period (.)') + param clusterDNSPrefix string + param k8Version string + param sshPubKey string + param location string + param adminUsername string + + + resource aksCluster 'Microsoft.ContainerService/managedClusters@2021-03-01' = { + name: clusterName + location: location + identity: { + type: 'SystemAssigned' }- ] - linuxProfile: { - adminUsername: 'adminUserName' - ssh: { - publicKeys: [ + properties: { + kubernetesVersion: k8Version + dnsPrefix: clusterDNSPrefix + enableRBAC: true + agentPoolProfiles: [ {- keyData: sshPubKey + name: 'agentpool' + count: 3 + vmSize: 'Standard_DS2_v2' + osDiskSizeGB: 30 + osDiskType: 'Ephemeral' + osType: 'Linux' + mode: 'System' } ]+ linuxProfile: { + adminUsername: adminUserName + ssh: { + publicKeys: [ + { + keyData: sshPubKey + } + ] + } + } + addonProfiles: { + openServiceMesh: { + enabled: true + config: {} + } + } } }- addonProfiles: { - openServiceMesh: { - enabled: true - config: {} 
+ ``` ++4. Open the *osm.aks.parameters.json* file and copy in the following content. Make sure you replace the deployment parameter values with your own values. ++ > [!NOTE] + > The *osm.aks.parameters.json* file is an example template parameters file needed for the Bicep deployment. Update the parameters specifically for your deployment environment. The parameters you need to add values for include: `clusterName`, `clusterDNSPrefix`, `k8Version`, `sshPubKey`, `location`, and `adminUsername`. To find a list of supported Kubernetes versions in your region, use the `az aks get-versions --location <region>` command. ++ ```json + { + "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#", + "contentVersion": "1.0.0.0", + "parameters": { + "clusterName": { + "value": "<YOUR CLUSTER NAME HERE>" + }, + "clusterDNSPrefix": { + "value": "<YOUR CLUSTER DNS PREFIX HERE>" + }, + "k8Version": { + "value": "<YOUR SUPPORTED KUBERNETES VERSION HERE>" + }, + "sshPubKey": { + "value": "<YOUR SSH KEY HERE>" + }, + "location": { + "value": "<YOUR AZURE REGION HERE>" + }, + "adminUsername": { + "value": "<YOUR ADMIN USERNAME HERE>" + } } }- } -} -``` --Open the *osm.aks.parameters.json* file and copy the following example content to it. Add the deployment-specific parameters, and then save the file. --> [!NOTE] -> The *osm.aks.parameters.json* file is an example template parameters file needed for the Bicep deployment. Update the parameters specifically for your deployment environment. The specific parameter values in this example need the following parameters to be updated: `clusterName`, `clusterDNSPrefix`, `k8Version`, and `sshPubKey`. To find a list of supported Kubernetes versions in your region, use the `az aks get-versions --location <region>` command. 
--```json -{ - "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#", - "contentVersion": "1.0.0.0", - "parameters": { - "clusterName": { - "value": "<YOUR CLUSTER NAME HERE>" - }, - "clusterDNSPrefix": { - "value": "<YOUR CLUSTER DNS PREFIX HERE>" - }, - "k8Version": { - "value": "<YOUR SUPPORTED KUBERNETES VERSION HERE>" - }, - "sshPubKey": { - "value": "<YOUR SSH KEY HERE>" - } - } -} -``` + ``` ### Deploy the Bicep files -To deploy the previously created Bicep files, open the terminal and authenticate to your Azure account for the Azure CLI by using the `az login` command. After you're authenticated to your Azure subscription, run the following commands for deployment: --```azurecli-interactive -az group create --name osm-bicep-test --location eastus2 +1. Open a terminal and authenticate to your Azure account for the Azure CLI using the `az login` command. +2. Deploy the Bicep files using the [`az deployment group create`][az-deployment-group-create] command. -az deployment group create \ - --name OSMBicepDeployment \ - --resource-group osm-bicep-test \ - --template-file osm.aks.bicep \ - --parameters @osm.aks.parameters.json -``` --When the deployment finishes, you should see a message that says the deployment succeeded. + ```azurecli-interactive + az deployment group create \ + --name OSMBicepDeployment \ + --resource-group osm-bicep-test \ + --template-file osm.aks.bicep \ + --parameters @osm.aks.parameters.json + ``` ## Validate installation of the OSM add-on -You use several commands to check that all of the components of the OSM add-on are enabled and running. --First, query the add-on profiles of the cluster to check the enabled state of the installed add-ons. The following command should return `true`: +1. Query the add-on profiles of the cluster to check the enabled state of the installed add-ons. 
The following command should return `true`: -```azurecli-interactive -az aks list -g <my-osm-aks-cluster-rg> -o json | jq -r '.[].addonProfiles.openServiceMesh.enabled' -``` + ```azurecli-interactive + az aks list -g <my-osm-aks-cluster-rg> -o json | jq -r '.[].addonProfiles.openServiceMesh.enabled' + ``` -The following `kubectl` commands will report the status of *osm-controller*: +2. Get the status of the *osm-controller* using the following `kubectl` commands. -```azurecli-interactive -kubectl get deployments -n kube-system --selector app=osm-controller -kubectl get pods -n kube-system --selector app=osm-controller -kubectl get services -n kube-system --selector app=osm-controller -``` + ```azurecli-interactive + kubectl get deployments -n kube-system --selector app=osm-controller + kubectl get pods -n kube-system --selector app=osm-controller + kubectl get services -n kube-system --selector app=osm-controller + ``` ## Access the OSM add-on configuration -You can configure the OSM controller via the OSM MeshConfig resource, and you can view the OSM controller's configuration settings via the Azure CLI. 
Use the `kubectl get` command as shown in the following example: --```azurecli-interactive -kubectl get meshconfig osm-mesh-config -n kube-system -o yaml -``` --Here's an example output of MeshConfig: --```yaml -apiVersion: config.openservicemesh.io/v1alpha1 -kind: MeshConfig -metadata: - creationTimestamp: "0000-00-00A00:00:00A" - generation: 1 - name: osm-mesh-config - namespace: kube-system - resourceVersion: "2494" - uid: 6c4d67f3-c241-4aeb-bf4f-b029b08faa31 -spec: - certificate: - serviceCertValidityDuration: 24h - featureFlags: - enableEgressPolicy: true - enableMulticlusterMode: false - enableWASMStats: true - observability: - enableDebugServer: true - osmLogLevel: info - tracing: - address: jaeger.osm-system.svc.cluster.local - enable: false - endpoint: /api/v2/spans - port: 9411 - sidecar: - configResyncInterval: 0s - enablePrivilegedInitContainer: false - envoyImage: mcr.microsoft.com/oss/envoyproxy/envoy:v1.18.3 - initContainerImage: mcr.microsoft.com/oss/openservicemesh/init:v0.9.1 - logLevel: error - maxDataPlaneConnections: 0 - resources: {} - traffic: - enableEgress: true - enablePermissiveTrafficPolicyMode: true - inboundExternalAuthorization: - enable: false - failureModeAllow: false - statPrefix: inboundExtAuthz - timeout: 1s - useHTTPSIngress: false -``` --Notice that `enablePermissiveTrafficPolicyMode` is configured to `true`. In OSM, permissive traffic policy mode bypasses [SMI](https://smi-spec.io/) traffic policy enforcement. In this mode, OSM automatically discovers services that are a part of the service mesh. The discovered services will have traffic policy rules programmed on each Envoy proxy sidecar to allow communications between these services. --> [!WARNING] -> Before you proceed, verify that your permissive traffic policy mode is set to `true`. 
If it isn't, change it to `true` by using the following command: -> -> ```OSM Permissive Mode to True -> kubectl patch meshconfig osm-mesh-config -n kube-system -p '{"spec":{"traffic":{"enablePermissiveTrafficPolicyMode":true}}}' --type=merge ->``` +You can configure the OSM controller using the OSM MeshConfig resource, and you can view the OSM controller's configuration settings using the Azure CLI. ++* View the OSM controller's configuration settings using the `kubectl get` command. ++ ```azurecli-interactive + kubectl get meshconfig osm-mesh-config -n kube-system -o yaml + ``` ++ Here's an example output of MeshConfig: ++ ```yaml + apiVersion: config.openservicemesh.io/v1alpha1 + kind: MeshConfig + metadata: + creationTimestamp: "0000-00-00A00:00:00A" + generation: 1 + name: osm-mesh-config + namespace: kube-system + resourceVersion: "2494" + uid: 6c4d67f3-c241-4aeb-bf4f-b029b08faa31 + spec: + certificate: + serviceCertValidityDuration: 24h + featureFlags: + enableEgressPolicy: true + enableMulticlusterMode: false + enableWASMStats: true + observability: + enableDebugServer: true + osmLogLevel: info + tracing: + address: jaeger.osm-system.svc.cluster.local + enable: false + endpoint: /api/v2/spans + port: 9411 + sidecar: + configResyncInterval: 0s + enablePrivilegedInitContainer: false + envoyImage: mcr.microsoft.com/oss/envoyproxy/envoy:v1.18.3 + initContainerImage: mcr.microsoft.com/oss/openservicemesh/init:v0.9.1 + logLevel: error + maxDataPlaneConnections: 0 + resources: {} + traffic: + enableEgress: true + enablePermissiveTrafficPolicyMode: true + inboundExternalAuthorization: + enable: false + failureModeAllow: false + statPrefix: inboundExtAuthz + timeout: 1s + useHTTPSIngress: false + ``` ++ Notice that `enablePermissiveTrafficPolicyMode` is configured to `true`. In OSM, permissive traffic policy mode bypasses [SMI](https://smi-spec.io/) traffic policy enforcement. In this mode, OSM automatically discovers services that are a part of the service mesh. 
The discovered services will have traffic policy rules programmed on each Envoy proxy sidecar to allow communications between these services. ++ > [!WARNING] + > Before you proceed, verify that your permissive traffic policy mode is set to `true`. If it isn't, change it to `true` using the following command: + > + > ```OSM Permissive Mode to True + > kubectl patch meshconfig osm-mesh-config -n kube-system -p '{"spec":{"traffic":{"enablePermissiveTrafficPolicyMode":true}}}' --type=merge + >``` ## Clean up resources -When you no longer need the Azure resources, use the Azure CLI to delete the deployment's test resource group: +* When you no longer need the Azure resources, delete the deployment's test resource group using the [`az group delete`][az-group-delete] command. -```azurecli-interactive -az group delete --name osm-bicep-test -``` + ```azurecli-interactive + az group delete --name osm-bicep-test + ``` -Alternatively, you can uninstall the OSM add-on and the related resources from your cluster. For more information, see [Uninstall the Open Service Mesh add-on from your AKS cluster][osm-uninstall]. + Alternatively, you can uninstall the OSM add-on and the related resources from your cluster. For more information, see [Uninstall the Open Service Mesh add-on from your AKS cluster][osm-uninstall]. 
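Rather than eyeballing the MeshConfig YAML above for `enablePermissiveTrafficPolicyMode`, the check can be scripted. This sketch parses a trimmed, hard-coded copy of that output (standing in for a live `kubectl get meshconfig osm-mesh-config -n kube-system -o yaml` call, which is not invoked here):

```shell
# Trimmed stand-in for the `kubectl get meshconfig ... -o yaml` output shown above.
meshconfig_yaml='
spec:
  traffic:
    enableEgress: true
    enablePermissiveTrafficPolicyMode: true
'

# Extract the flag value by stripping the indentation and the key name.
mode=$(printf '%s' "$meshconfig_yaml" | sed -n 's/^ *enablePermissiveTrafficPolicyMode: //p')
echo "permissive mode: $mode"   # permissive mode: true
```

In practice you would pipe the real `kubectl` output through the same `sed` filter and apply the `kubectl patch` shown above only when the value is not `true`.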
## Next steps This article showed you how to install the OSM add-on on an AKS cluster and veri <!-- Links --> <!-- Internal -->--[az-feature-register]: /cli/azure/feature#az_feature_register -[az-feature-list]: /cli/azure/feature#az_feature_list -[az-provider-register]: /cli/azure/provider#az_provider_register -[az-extension-add]: /cli/azure/extension#az_extension_add -[az-extension-update]: /cli/azure/extension#az_extension_update [osm-uninstall]: open-service-mesh-uninstall-add-on.md [osm-deploy-sample-app]: https://release-v1-0.docs.openservicemesh.io/docs/getting_started/install_apps/ [osm-onboard-app]: https://release-v1-0.docs.openservicemesh.io/docs/guides/app_onboarding/+[az-deployment-group-create]: /cli/azure/deployment/group#az_deployment_group_create +[az-group-delete]: /cli/azure/group#az_group_delete |
aks | Windows Aks Partner Solutions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/windows-aks-partner-solutions.md | Title: Windows AKS Partner Solutions description: Find partner-tested solutions that enable you to build, test, deploy, manage and monitor your Windows-based apps on Windows containers on AKS. Previously updated : 08/04/2023 Last updated : 09/26/2023 # Windows AKS Partners Solutions Storage enables standardized and seamless storage interactions, ensuring high ap ![Logo of NetApp.](./media/windows-aks-partner-solutions/netapp.png) -Astra Control provides application data management for stateful workloads on Azure Kubernetes Service (AKS). Discover your apps and define protection policies that automatically back up workloads offsite. Protect, clone, and move applications across Kubernetes environments with ease. +Astra provides dynamic storage provisioning for stateful workloads on Azure Kubernetes Service (AKS). It also provides data protection using snapshots and clones. Provision SMB volumes through the Kubernetes control plane, making storage seamless and on-demand for all your Windows AKS workloads. -Follow the steps provided in [this blog](https://techcommunity.microsoft.com/t5/containers/persistent-storage-for-windows-containers-on-azure-kubernetes/ba-p/3836781) post to dynamically provision SMB volumes for Windows AKS workloads. +Follow the steps provided in [this blog](https://techcommunity.microsoft.com/t5/azure-architecture-blog/azure-netapp-files-smb-volumes-for-azure-kubernetes-services/ba-p/3052900) post to dynamically provision SMB volumes for Windows AKS workloads. ## Config management Automate and standardize the system settings across your environments to enhance Chef provides visibility and threat detection from build to runtime that monitors, audits, and remediates the security of your Azure cloud services and Kubernetes and Windows container assets. 
Chef provides comprehensive visibility and continuous compliance into your cloud security posture and helps limit the risk of misconfigurations in cloud-native environments by providing best practices based on CIS, STIG, SOC2, PCI-DSS and other benchmarks. This is part of a broader compliance offering that supports on-premises or hybrid cloud environments including applications deployed on the edge. -To learn more about Chef's capabilities, check out the comprehensive 'how-to' blog post here: [Securing Your Windows Environments Running on Azure Kubernetes Service with Chef](https://techcommunity.microsoft.com/t5/containers/securing-your-windows-environments-running-on-azure-kubernetes/ba-p/3821830). +To learn more about Chef's capabilities, check out the comprehensive 'how-to' blog post here: [Securing Your Windows Environments Running on Azure Kubernetes Service with Chef](https://techcommunity.microsoft.com/t5/containers/securing-your-windows-environments-running-on-azure-kubernetes/ba-p/3821830). |
app-service | App Service Key Vault References | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-key-vault-references.md | If your vault is configured with [network restrictions](../key-vault/general/ove 2. Make sure that the vault's configuration allows the network or subnet that your app uses to access it. --> [!NOTE] -> Windows container currently does not support key vault references over VNet Integration. -- ### Access vaults with a user-assigned identity Some apps need to reference secrets at creation time, when a system-assigned identity isn't available yet. In these cases, a user-assigned identity can be created and given access to the vault in advance. |
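The app settings discussed in the row above resolve secrets through App Service's key vault reference syntax. For orientation, an app setting carrying such a reference looks like this (the vault and secret names are placeholders; the `SecretUri` form shown is one of the documented reference formats):

```json
{
  "name": "MySecretSetting",
  "value": "@Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/)"
}
```

When the app's managed identity has get-secret access to the vault (and any network restrictions permit the app's outbound traffic), App Service replaces the reference with the secret value at runtime.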
app-service | Quickstart Dotnetcore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-dotnetcore.md | If you have already installed Visual Studio 2022: :::zone-end ++- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet). +- The [Azure Developer CLI](/azure/developer/azure-developer-cli/install-azd) +- [The latest .NET 7.0 SDK.](https://dotnet.microsoft.com/download/dotnet/7.0) +++ ## 1. Create an ASP.NET web app +++## 1. Initialize the ASP.NET web app template ++ :::zone target="docs" pivot="development-environment-vs" ### [.NET 7.0](#tab/net70) In this step, you fork a demo project to deploy. :::zone-end -## 2. Publish your web app ++This quickstart uses the [Azure Developer CLI](/azure/developer/azure-developer-cli/overview) (`azd`) both to create Azure resources and deploy code to it. For more information about Azure Developer CLI, visit the [documentation](/azure/developer/azure-developer-cli/install-azd?tabs=winget-windows%2Cbrew-mac%2Cscript-linux&pivots=os-windows) or [training path](/training/paths/azure-developer-cli/). ++Retrieve and initialize [the ASP.NET Core web app template](https://github.com/Azure-Samples/quickstart-deploy-aspnet-core-app-service.git) for this quickstart using the following steps: ++1. Open a terminal window on your machine to an empty working directory. Initialize the `azd` template using the `azd init` command. ++ ```bash + azd init --template https://github.com/Azure-Samples/quickstart-deploy-aspnet-core-app-service.git + ``` + When prompted for an environment name, enter `dev`. + +2. From the same terminal session, run the application locally using the `dotnet run` command. Use the `--project` parameter to specify the `src` directory of the `azd` template, which is where the application code lives. 
++ ```bash + dotnet run --project src --urls=https://localhost:5001/ + ``` -To publish your web app, you must first create and configure a new App Service that you can publish your app to. +3. Open a web browser and navigate to the app at `https://localhost:5001`. The ASP.NET Core 7.0 web app template is displayed on the page. ++ :::image type="content" source="media/quickstart-dotnetcore/local-web-app-net.png" alt-text="Screenshot of Visual Studio Code - ASP.NET Core 7.0 in local browser." lightbox="media/quickstart-dotnetcore/local-web-app-net.png" border="true"::: + ++## 2. Publish your web app -As part of setting up the App Service, you create: +The AZD template contains files that will generate the following required resources for your application to run in App service: - A new [resource group](../azure-resource-manager/management/overview.md#terminology) to contain all of the Azure resources for the service.-- A new [Hosting Plan](overview-hosting-plans.md) that specifies the location, size, and features of the web server farm that hosts your app.+- A new [App Service plan](overview-hosting-plans.md) that specifies the location, size, and features of the web server farm that hosts your app. +- A new [App Service app](overview-hosting-plans.md) instance to run the deployed application. Follow these steps to create your App Service resources and publish your project: Follow these steps to create your App Service resources and publish your project :::zone-end ++1. Sign into your Azure account by using the az login command and following the prompt: ++ ```bash + azd auth login + ``` ++2. Create the Azure resources and deploy your app using the `azd up` command: ++ ```bash + azd up + ``` ++ The `azd up` command might take a few minutes to complete. `azd up` uses the Bicep files in your projects to create the resource group, App Service Plan, and hosting app. It also performs certain configurations such as enabling logging and deploys your compiled app code. 
While it's running, the command provides messages about the provisioning and deployment process, including a link to the deployment in Azure. When it finishes, the command also displays a link to the deployed application.
++3. Open a web browser and navigate to the URL:
++ You see the ASP.NET Core 7.0 web app displayed in the page.
++ :::image type="content" source="media/quickstart-dotnetcore/browse-dotnet-70.png" lightbox="media/quickstart-dotnetcore/browse-dotnet-70.png" border="true" alt-text="Screenshot of the deployed ASP.NET Core 7.0 sample app.":::
+
+ ## 3. Update the app and redeploy
Follow these steps to update and redeploy your web app:
You see the updated ASP.NET Core 7.0 web app displayed in the page.
:::zone-end
++<!-- markdownlint-disable MD044 -->
+<!-- markdownlint-enable MD044 -->
++In the local directory, open the *Index.cshtml* file. Replace the first `<div>` element:
++```razor
+<div class="jumbotron">
+ <h1>.NET 💜 Azure</h1>
+ <p class="lead">Example .NET app to Azure App Service.</p>
+</div>
+```
++Save your changes, then redeploy the app using the `azd up` command again:
++```azurecli
+azd up
+```
++`azd up` will skip the provisioning resources step this time and only redeploy your code, since there have been no changes to the Bicep files.
++Once deployment has completed, the browser will open to the updated ASP.NET Core 7.0 web app.
+++ ## 4. Manage the Azure app
To manage your web app, go to the [Azure portal](https://portal.azure.com), and search for and select **App Services**.
The **Overview** page for your web app contains options for basic management li
:::zone-end
<!-- markdownlint-enable MD044 -->
+
## Next steps
### [.NET 7.0](#tab/net70)
Advance to the next article to learn how to create a .NET Core app and connect i
> [!div class="nextstepaction"]
> [Configure ASP.NET Core app](configure-language-dotnetcore.md)
+> [!div class="nextstepaction"]
+> [Learn more about the Azure Developer CLI](/azure/developer/azure-developer-cli/overview)
+
### [.NET Framework 4.8](#tab/netframework48)
Advance to the next article to learn how to create a .NET Framework app and connect it to a SQL Database: |
application-gateway | Alb Controller Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/alb-controller-release-notes.md | Instructions for new or existing deployments of ALB Controller are found in the - [Upgrade existing ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md#for-existing-deployments) ## Latest Release (Recommended)-July 25, 2023 - 0.4.023971 - Ingress + Gateway co-existence improvements +September 25, 2023 - 0.5.024542 - Custom Health Probes, Controller HA, Multi-site support for Ingress, [helm_release via Terraform fix](https://github.com/Azure/AKS/issues/3857), Path rewrite for Gateway API, status for Ingress resources, quality improvements ## Release history+July 25, 2023 - 0.4.023971 - Ingress + Gateway co-existence improvements July 24, 2023 - 0.4.023961 - Improved Ingress support July 24, 2023 - 0.4.023921 - Initial release of ALB Controller |
application-gateway | Api Specification Kubernetes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/api-specification-kubernetes.md | why a particular condition type has been raised on the Application Gateway for C </tr> </thead> <tbody><tr><td><p>"Accepted"</p></td>-<td><p>AlbReasonAccepted indicates the Application Gateway for Containers resource +<td><p>AlbReasonAccepted indicates that the Application Gateway for Containers resource has been accepted by the controller.</p> </td> </tr><tr><td><p>"Ready"</p></td> is in the process of being created, updated or deleted.</p> <h3 id="alb.networking.azure.io/v1.AlbConditionType">AlbConditionType (<code>string</code> alias)</h3> <div>-<p>AlbConditionType is a type of condition associated with an Application Gateway for Containers resource. This type should be used with the AlbStatus.Conditions +<p>AlbConditionType is a type of condition associated with an +Application Gateway for Containers resource. 
This type should be used with the AlbStatus.Conditions field.</p> </div> <table> has been accepted by the controller.</p> (<em>Appears on:</em><a href="#alb.networking.azure.io/v1.ApplicationLoadBalancer">ApplicationLoadBalancer</a>) </p> <div>-<p>AlbSpec defines the specifications for Application Gateway for Containers resource.</p> +<p>AlbSpec defines the specifications for the Application Gateway for Containers resource.</p> </div> <table> <thead> vocabulary to describe BackendTLSPolicy state.</p> <p>Known condition types are:</p> <ul> <li>“Accepted”</li>-<li>“Ready”</li> </ul> </td> </tr> Kubernetes core API.</p> <td> <code>name</code><br/> <em>-string +<a href="https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1beta1.ObjectName"> +Gateway API .ObjectName +</a> </em> </td> <td> string <td> <code>kind</code><br/> <em>-string +<a href="https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1beta1.Kind"> +Gateway API .Kind +</a> </em> </td> <td> string </tr> <tr> <td>-<code>gateway</code><br/> -<em> -string -</em> -</td> -<td> -<p>Gateway is the name of the Gateway.</p> -</td> -</tr> -<tr> -<td> <code>listeners</code><br/> <em> []string vocabulary to describe FrontendTLSPolicy state.</p> <p>Known condition types are:</p> <ul> <li>“Accepted”</li>-<li>“Ready”</li> </ul> </td> </tr> When the given HealthCheckPolicy is correctly configured</p> </tr><tr><td><p>"InvalidHealthCheckPolicy"</p></td> <td><p>HealthCheckPolicyReasonInvalid is the reason when the HealthCheckPolicy isn't Accepted</p> </td>+</tr><tr><td><p>"InvalidPort"</p></td> +<td><p>HealthCheckPolicyReasonInvalidPort is used when the port is invalid</p> +</td> </tr><tr><td><p>"InvalidServiceReference"</p></td> <td><p>HealthCheckPolicyReasonInvalidServiceReference is used when the service is invalid</p> </td> field.</p> <h3 id="alb.networking.azure.io/v1.HealthCheckPolicyConfig">HealthCheckPolicyConfig </h3> <p>-(<em>Appears on:</em><a 
href="#alb.networking.azure.io/v1.HealthCheckPolicySpec">HealthCheckPolicySpec</a>, <a href="#alb.networking.azure.io/v1.IngressBackendSettings">IngressBackendSettings</a>) +(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.HealthCheckPolicySpec">HealthCheckPolicySpec</a>) </p> <div> <p>HealthCheckPolicyConfig defines the schema for HealthCheck check specification</p> Protocol </tr> </tbody> </table>+<h3 id="alb.networking.azure.io/v1.IngressBackendSettingStatus">IngressBackendSettingStatus +</h3> +<p> +(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.IngressExtensionStatus">IngressExtensionStatus</a>) +</p> +<div> +<p>IngressBackendSettingStatus describes the state of a BackendSetting</p> +</div> +<table> +<thead> +<tr> +<th>Field</th> +<th>Description</th> +</tr> +</thead> +<tbody> +<tr> +<td> +<code>service</code><br/> +<em> +string +</em> +</td> +<td> +<p>Service identifies the BackendSetting this status describes</p> +</td> +</tr> +<tr> +<td> +<code>validationErrors</code><br/> +<em> +[]string +</em> +</td> +<td> +<em>(Optional)</em> +<p>Errors is a list of errors relating to this setting</p> +</td> +</tr> +<tr> +<td> +<code>valid</code><br/> +<em> +bool +</em> +</td> +<td> +<p>Valid indicates that there are no validation errors present on this BackendSetting</p> +</td> +</tr> +</tbody> +</table> <h3 id="alb.networking.azure.io/v1.IngressBackendSettings">IngressBackendSettings </h3> <p> string </td> <td> <em>(Optional)</em>-<p>TrustedRootCertificate can be used to supply a certificate for the gateway to trust when communciating to the +<p>TrustedRootCertificate can be used to supply a certificate for the gateway to trust when communicating to the backend on a port specified as https</p> </td> </tr> IngressTimeouts <p>Timeouts define a set of timeout parameters to be applied to an Ingress</p> </td> </tr>-<tr> -<td> -<code>healthCheck</code><br/> -<em> -<a href="#alb.networking.azure.io/v1.HealthCheckPolicyConfig"> -HealthCheckPolicyConfig -</a> 
-</em> -</td> -<td> -<em>(Optional)</em> -<p>HealthCheck defines a health probe which is used to determine if a backend is healthy</p> -</td> -</tr> </tbody> </table> <h3 id="alb.networking.azure.io/v1.IngressCertificate">IngressCertificate IngressExtensionSpec </table> </td> </tr>+<tr> +<td> +<code>status</code><br/> +<em> +<a href="#alb.networking.azure.io/v1.IngressExtensionStatus"> +IngressExtensionStatus +</a> +</em> +</td> +<td> +<em>(Optional)</em> +<p>Status describes the current state of the IngressExtension as enacted by the ALB controller</p> +</td> +</tr> </tbody> </table>+<h3 id="alb.networking.azure.io/v1.IngressExtensionConditionReason">IngressExtensionConditionReason +(<code>string</code> alias)</h3> +<div> +<p>IngressExtensionConditionReason defines the set of reasons that explain why a +particular IngressExtension condition type has been raised.</p> +</div> +<table> +<thead> +<tr> +<th>Value</th> +<th>Description</th> +</tr> +</thead> +<tbody><tr><td><p>"Accepted"</p></td> +<td><p>IngressExtensionReasonAccepted is used to set the IngressExtensionConditionAccepted to Accepted</p> +</td> +</tr><tr><td><p>"HasValidationErrors"</p></td> +<td><p>IngressExtensionReasonHasErrors indicates there are some validation errors</p> +</td> +</tr><tr><td><p>"NoValidationErrors"</p></td> +<td><p>IngressExtensionReasonNoErrors indicates there are no validation errors</p> +</td> +</tr><tr><td><p>"PartiallyAcceptedWithErrors"</p></td> +<td><p>IngressExtensionReasonPartiallyAccepted is used to set the IngressExtensionConditionAccepted to Accepted, but with non-fatal validation errors</p> +</td> +</tr></tbody> +</table> +<h3 id="alb.networking.azure.io/v1.IngressExtensionConditionType">IngressExtensionConditionType +(<code>string</code> alias)</h3> +<div> +<p>IngressExtensionConditionType is a type of condition associated with a +IngressExtension. 
This type should be used with the IngressExtensionStatus.Conditions +field.</p> +</div> +<table> +<thead> +<tr> +<th>Value</th> +<th>Description</th> +</tr> +</thead> +<tbody><tr><td><p>"Accepted"</p></td> +<td><p>IngressExtensionConditionAccepted indicates if the IngressExtension has been accepted (reconciled) by the controller</p> +</td> +</tr><tr><td><p>"Errors"</p></td> +<td><p>IngressExtensionConditionErrors indicates if there are validation or build errors on the extension</p> +</td> +</tr></tbody> +</table> <h3 id="alb.networking.azure.io/v1.IngressExtensionSpec">IngressExtensionSpec </h3> <p> IngressExtensionSpec </tr> </tbody> </table>+<h3 id="alb.networking.azure.io/v1.IngressExtensionStatus">IngressExtensionStatus +</h3> +<p> +(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.IngressExtension">IngressExtension</a>) +</p> +<div> +<p>IngressExtensionStatus describes the current state of the IngressExtension</p> +</div> +<table> +<thead> +<tr> +<th>Field</th> +<th>Description</th> +</tr> +</thead> +<tbody> +<tr> +<td> +<code>listenerSettings</code><br/> +<em> +<a href="#alb.networking.azure.io/v1.IngressListenerSettingStatus"> +[]IngressListenerSettingStatus +</a> +</em> +</td> +<td> +<em>(Optional)</em> +<p>ListenerSettings has detailed status information regarding each ListenerSetting</p> +</td> +</tr> +<tr> +<td> +<code>backendSettings</code><br/> +<em> +<a href="#alb.networking.azure.io/v1.IngressBackendSettingStatus"> +[]IngressBackendSettingStatus +</a> +</em> +</td> +<td> +<em>(Optional)</em> +<p>BackendSettings has detailed status information regarding each BackendSettings</p> +</td> +</tr> +<tr> +<td> +<code>conditions</code><br/> +<em> +<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Condition"> +[]Kubernetes meta/v1.Condition +</a> +</em> +</td> +<td> +<em>(Optional)</em> +<p>Conditions describe the current conditions of the IngressExtension. 
+Known condition types are:</p> +<ul> +<li>“Accepted”</li> +<li>“Errors”</li> +</ul> +</td> +</tr> +</tbody> +</table> <h3 id="alb.networking.azure.io/v1.IngressListenerPort">IngressListenerPort </h3> <p> IngressListenerTLS </tr> </tbody> </table>+<h3 id="alb.networking.azure.io/v1.IngressListenerSettingStatus">IngressListenerSettingStatus +</h3> +<p> +(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.IngressExtensionStatus">IngressExtensionStatus</a>) +</p> +<div> +<p>IngressListenerSettingStatus describes the state of a listener setting</p> +</div> +<table> +<thead> +<tr> +<th>Field</th> +<th>Description</th> +</tr> +</thead> +<tbody> +<tr> +<td> +<code>host</code><br/> +<em> +string +</em> +</td> +<td> +<p>Host identifies the listenerSetting this status describes</p> +</td> +</tr> +<tr> +<td> +<code>validationErrors</code><br/> +<em> +[]string +</em> +</td> +<td> +<em>(Optional)</em> +<p>Errors is a list of errors relating to this setting</p> +</td> +</tr> +<tr> +<td> +<code>valid</code><br/> +<em> +bool +</em> +</td> +<td> +<em>(Optional)</em> +<p>Valid indicates that there are no validation errors present on this listenerSetting</p> +</td> +</tr> +</tbody> +</table> <h3 id="alb.networking.azure.io/v1.IngressListenerTLS">IngressListenerTLS </h3> <p> IngressCertificate </em> </td> <td>+<em>(Optional)</em> <p>Certificate specifies a TLS Certificate to configure a Listener with</p> </td> </tr> field.</p> (<em>Appears on:</em><a href="#alb.networking.azure.io/v1.RoutePolicySpec">RoutePolicySpec</a>) </p> <div>-<p>RoutePolicyConfig defines the schema for RoutePolicy specification. This allows the specification of the following attributes: +<p>RoutePolicyConfig defines the schema for RoutePolicy specification. 
+This allows the specification of the following attributes: * Timeouts * Session Affinity</p> </div> int32 </td> <td> <em>(Optional)</em>-<p>Start defines the start of the range of status codes to use for HealthCheck checks.</p> +<p>Start defines the start of the range of status codes to use for HealthCheck checks. +This is inclusive.</p> </td> </tr> <tr> int32 </td> <td> <em>(Optional)</em>-<p>End defines the end of the range of status codes to use for HealthCheck checks.</p> +<p>End defines the end of the range of status codes to use for HealthCheck checks. +This is inclusive.</p> </td> </tr> </tbody> |
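The inclusive `start`/`end` status-code range matching described above can be sketched in a few lines. This is an illustrative Python model (the function name and dict shape are hypothetical), not the controller's actual code:

```python
def status_code_matches(code: int, ranges: list) -> bool:
    """Return True if an HTTP status code falls inside any configured
    range. Both start and end are inclusive, per the spec above."""
    return any(r["start"] <= code <= r["end"] for r in ranges)

# A single 200-299 range, as used by typical health checks
healthy_range = [{"start": 200, "end": 299}]
```

Because both bounds are inclusive, a range of 200-299 accepts 200 and 299 themselves, while 300 falls outside it.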
application-gateway | Custom Health Probe | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/custom-health-probe.md | + + Title: Custom health probe for Azure Application Gateway for Containers +description: Learn how to configure a custom health probe for Azure Application Gateway for Containers. +++++ Last updated : 09/25/2023++++# Custom health probe for Application Gateway for Containers ++Application Gateway for Containers monitors the health of all backend targets by default. As backend targets become healthy or unhealthy, Application Gateway for Containers only distributes traffic to healthy endpoints. ++In addition to using default health probe monitoring, you can also customize the health probe to suit your application's requirements. This article discusses both default and custom health probes. ++The order and logic of health probing is as follows: +1. Use definition of HealthCheckPolicy Custom Resource (CR). +2. If there's no HealthCheckPolicy CR, then use [Readiness probe](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-readiness-probes) +3. If there's no Readiness probe defined, use the [default health probe](#default-health-probe) ++The following properties make up custom health probes: ++| Property | Default Value | +| -- | - | +| port | the port number to initiate health probes to. Valid port values are 1-65535. | +| interval | how often in seconds health probes should be sent to the backend target. The minimum interval must be > 0 seconds. | +| timeout | how long in seconds the request should wait until it's deemed a failure The minimum interval must be > 0 seconds. | +| healthyThreshold | number of health probes before marking the target endpoint healthy. The minimum interval must be > 0. | +| unhealthyTreshold | number of health probes to fail before the backend target should be labeled unhealthy. The minimum interval must be > 0. 
|
+| protocol | specifies either non-encrypted `HTTP` traffic or encrypted traffic via TLS as `HTTPS` |
+| (http) host | the hostname specified in the request to the backend target. |
+| (http) path | the specific path of the request. If a single file should be loaded, the path may be /index.html as an example. |
+| (http -> match) statusCodes | Contains two properties, `start` and `end`, that define the range of valid HTTP status codes returned from the backend. |
++## Default health probe
+Application Gateway for Containers automatically configures a default health probe when you don't define a custom probe configuration or configure a readiness probe. The monitoring behavior works by making an HTTP GET request to the IP addresses of configured backend targets. For default probes, if the backend target is configured for HTTPS, the probe uses HTTPS to test health of the backend targets.
++For more implementation details, see [HealthCheckPolicyConfig](api-specification-kubernetes.md#alb.networking.azure.io/v1.HealthCheckPolicyConfig) in the API specification.
++When the default health probe is used, the following values for each health probe property are used:
++| Property | Default Value |
+| -- | - |
+| interval | 5 seconds |
+| timeout | 30 seconds |
+| healthyThreshold | 1 probe |
+| unhealthyThreshold | 3 probes |
+| port | 80 for HTTP and 443 for HTTPS to the backend |
+| protocol | HTTP for HTTP and HTTPS when TLS is specified |
+| (http) host | localhost |
+| (http) path | / |
++## Custom health probe
++In both Gateway API and Ingress API, a custom health probe can be defined by creating a [_HealthCheckPolicy_ resource](api-specification-kubernetes.md#alb.networking.azure.io/v1.HealthCheckPolicy) and referencing a service the health probes should check against.
As the service is referenced by an HTTPRoute or Ingress resource with a class reference to Application Gateway for Containers, the custom health probe will be used for each reference. ++In this example, the health probe emitted by Application Gateway for Containers will send the hostname contoso.com to the pods that make up _test-service_. The request path will be `/`, a probe will be emitted every 5 seconds and wait 3 seconds before determining the connection has timed out. If a response is received, an HTTP response code between 200 and 299 (inclusive of 200 and 299) will be considered healthy, all other responses will be considered unhealthy. ++```bash +kubectl apply -f - <<EOF +apiVersion: alb.networking.azure.io/v1 +kind: HealthCheckPolicy +metadata: + name: gateway-health-check-policy + namespace: test-infra +spec: + targetRef: + group: "" + kind: Service + name: test-service + namespace: test-infra + default: + interval: 5s + timeout: 3s + healthyThreshold: 1 + unhealthyThreshold: 1 + protocol: HTTP + http: + host: contoso.com + path: / + match: + statusCodes: + - start: 200 + end: 299 +EOF +``` ++ |
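The `healthyThreshold`/`unhealthyThreshold` counting behavior configured above can be illustrated with a small model. This is a hypothetical Python sketch of consecutive-result threshold counting, not the ALB controller's actual implementation:

```python
class ProbeTracker:
    """Toy model of healthyThreshold/unhealthyThreshold counting.
    Illustrative only -- not the ALB controller's implementation."""

    def __init__(self, healthy_threshold: int = 1, unhealthy_threshold: int = 3):
        self.healthy_threshold = healthy_threshold
        self.unhealthy_threshold = unhealthy_threshold
        self.healthy = True   # assume targets start out healthy
        self._streak = 0      # consecutive identical probe results
        self._last = None

    def record(self, success: bool) -> bool:
        # Count consecutive successes or failures
        if success == self._last:
            self._streak += 1
        else:
            self._last, self._streak = success, 1
        # Flip state once the relevant threshold is reached
        if success and self._streak >= self.healthy_threshold:
            self.healthy = True
        elif not success and self._streak >= self.unhealthy_threshold:
            self.healthy = False
        return self.healthy
```

With the default thresholds (1 healthy, 3 unhealthy), two failed probes leave the target marked healthy; a third consecutive failure marks it unhealthy, and a single success restores it.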
application-gateway | How To Multiple Site Hosting Ingress Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-multiple-site-hosting-ingress-api.md | + + Title: Multi-site hosting with Application Gateway for Containers - Ingress API (preview) +description: Learn how to host multiple sites with Application Gateway for Containers using the Ingress API. +++++ Last updated : 09/25/2023++++# Multi-site hosting with Application Gateway for Containers - Ingress API (preview) ++This document helps you set up an example application that uses the Ingress API to demonstrate hosting multiple sites on the same Kubernetes Ingress resource / Application Gateway for Containers frontend. Steps are provided to: +- Create an [Ingress](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#ingressrule-v1-networking-k8s-io) resource with two hosts. ++## Background ++Application Gateway for Containers enables multi-site hosting by allowing you to configure more than one web application on the same port. Two or more unique sites can be hosted using unique backend services. See the following example scenario: ++![A diagram showing multisite hosting with Application Gateway for Containers.](./media/how-to-multiple-site-hosting-ingress-api/multiple-site-hosting.png) ++## Prerequisites ++> [!IMPORTANT] +> Application Gateway for Containers is currently in PREVIEW.<br> +> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. ++1. If you follow the BYO deployment strategy, ensure you have set up your Application Gateway for Containers resources and [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md) +2. 
If you follow the ALB managed deployment strategy, ensure you have provisioned your [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md) and provisioned the Application Gateway for Containers resources via the [ApplicationLoadBalancer custom resource](quickstart-create-application-gateway-for-containers-managed-by-alb-controller.md). +3. Deploy sample HTTP application + Apply the following deployment.yaml file on your cluster to create a sample web application to demonstrate path, query, and header based routing. + ```bash + kubectl apply -f https://trafficcontrollerdocs.blob.core.windows.net/examples/traffic-split-scenario/deployment.yaml + ``` + + This command creates the following on your cluster: + - a namespace called `test-infra` + - 2 services called `backend-v1` and `backend-v2` in the `test-infra` namespace + - 2 deployments called `backend-v1` and `backend-v2` in the `test-infra` namespace ++## Deploy the required Ingress resource ++# [ALB managed deployment](#tab/alb-managed) ++1. Create an Ingress +```bash +kubectl apply -f - <<EOF +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: ingress-01 + namespace: test-infra + annotations: + alb.networking.azure.io/alb-name: alb-test + alb.networking.azure.io/alb-namespace: alb-test-infra +spec: + ingressClassName: azure-alb-external + rules: + - host: contoso.com + http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: backend-v1 + port: + number: 8080 + - host: fabrikam.com + http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: backend-v2 + port: + number: 8080 +EOF +``` +++# [Bring your own (BYO) deployment](#tab/byo) ++1. 
Set the following environment variables ++```bash +RESOURCE_GROUP='<resource group name of the Application Gateway For Containers resource>' +RESOURCE_NAME='alb-test' ++RESOURCE_ID=$(az network alb show --resource-group $RESOURCE_GROUP --name $RESOURCE_NAME --query id -o tsv) +FRONTEND_NAME='frontend' +``` ++2. Create an Ingress +```bash +kubectl apply -f - <<EOF +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: ingress-01 + namespace: test-infra + annotations: + alb.networking.azure.io/alb-id: $RESOURCE_ID + alb.networking.azure.io/alb-frontend: $FRONTEND_NAME +spec: + ingressClassName: azure-alb-external + rules: + - host: contoso.com + http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: backend-v1 + port: + number: 8080 + - host: fabrikam.com + http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: backend-v2 + port: + number: 8080 +EOF +``` ++++Once the ingress resource has been created, ensure the status shows the hostname of your load balancer and that both ports are listening for requests. +```bash +kubectl get ingress ingress-01 -n test-infra -o yaml +``` ++Example output of successful gateway creation. 
+```yaml +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + annotations: + alb.networking.azure.io/alb-frontend: FRONTEND_NAME + alb.networking.azure.io/alb-id: /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/yyyyyyyy/providers/Microsoft.ServiceNetworking/trafficControllers/zzzzzz + kubectl.kubernetes.io/last-applied-configuration: | + {"apiVersion":"networking.k8s.io/v1","kind":"Ingress","metadata":{"annotations":{"alb.networking.azure.io/alb-frontend":"FRONTEND_NAME","alb.networking.azure.io/alb-id":"/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/yyyyyyyy/providers/Microsoft.ServiceNetworking/trafficControllers/zzzzzz"},"name" +:"ingress-01","namespace":"test-infra"},"spec":{"ingressClassName":"azure-alb-external","rules":[{"host":"example.com","http":{"paths":[{"backend":{"service":{"name":"echo","port":{"number":80}}},"path":"/","pathType":"Prefix"}]}}],"tls":[{"hosts":["example.com"],"secretName":"listener-tls-secret"}]}} + creationTimestamp: "2023-07-22T18:02:13Z" + generation: 2 + name: ingress-01 + namespace: test-infra + resourceVersion: "278238" + uid: 17c34774-1d92-413e-85ec-c5a8da45989d +spec: + ingressClassName: azure-alb-external + rules: + - host: contoso.com + http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: backend-v1 + port: + number: 8080 + - host: fabrikam.com + http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: backend-v2 + port: + number: 8080 +status: + loadBalancer: + ingress: + - hostname: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.fzyy.alb.azure.com + ports: + - port: 80 + protocol: TCP +``` ++## Test access to the application ++Now we're ready to send some traffic to our sample application, via the FQDN assigned to the frontend. Use the following command to get the FQDN. 
++```bash
+fqdn=$(kubectl get ingress ingress-01 -n test-infra -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
+```
++Next, specify the server name indicator using the curl command. A request to `contoso.com` resolved to the frontend FQDN should return a response from the backend-v1 service.
++```bash
+fqdnIp=$(dig +short $fqdn)
+curl -k --resolve contoso.com:80:$fqdnIp http://contoso.com
+```
++Via the response we should see:
+```json
+{
+ "path": "/",
+ "host": "contoso.com",
+ "method": "GET",
+ "proto": "HTTP/1.1",
+ "headers": {
+ "Accept": [
+ "*/*"
+ ],
+ "User-Agent": [
+ "curl/7.81.0"
+ ],
+ "X-Forwarded-For": [
+ "xxx.xxx.xxx.xxx"
+ ],
+ "X-Forwarded-Proto": [
+ "http"
+ ],
+ "X-Request-Id": [
+ "dcd4bcad-ea43-4fb6-948e-a906380dcd6d"
+ ]
+ },
+ "namespace": "test-infra",
+ "ingress": "",
+ "service": "",
+ "pod": "backend-v1-5b8fd96959-f59mm"
+}
+```
++Next, specify the server name indicator using the curl command. A request to `fabrikam.com` resolved to the frontend FQDN should return a response from the backend-v2 service.
++```bash
+fqdnIp=$(dig +short $fqdn)
+curl -k --resolve fabrikam.com:80:$fqdnIp http://fabrikam.com
+```
++Via the response we should see:
+```json
+{
+ "path": "/",
+ "host": "fabrikam.com",
+ "method": "GET",
+ "proto": "HTTP/1.1",
+ "headers": {
+ "Accept": [
+ "*/*"
+ ],
+ "User-Agent": [
+ "curl/7.81.0"
+ ],
+ "X-Forwarded-For": [
+ "xxx.xxx.xxx.xxx"
+ ],
+ "X-Forwarded-Proto": [
+ "http"
+ ],
+ "X-Request-Id": [
+ "adae8cc1-8030-4d95-9e05-237dd4e3941b"
+ ]
+ },
+ "namespace": "test-infra",
+ "ingress": "",
+ "service": "",
+ "pod": "backend-v2-594bd59865-ppv9w"
+}
+```
++Congratulations, you've installed ALB Controller, deployed a backend application, and routed traffic to two different backend services using different hostnames with the Ingress API on Application Gateway for Containers. |
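The host-based selection performed by the two Ingress rules can be modeled as an exact-host lookup. This toy Python sketch (the `select_backend` helper is hypothetical) illustrates the routing decision, not the actual data path:

```python
def select_backend(host, rules):
    """Pick a backend service by exact host match, mirroring the two
    Ingress rules in the example (hypothetical helper for illustration)."""
    return rules.get(host)

# Host-to-service mapping from the multi-site Ingress example
rules = {"contoso.com": "backend-v1", "fabrikam.com": "backend-v2"}
```

A request carrying the `contoso.com` host header lands on backend-v1, `fabrikam.com` lands on backend-v2, and an unmatched host selects no backend.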
application-gateway | How To Url Rewrite Gateway Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-url-rewrite-gateway-api.md | + + Title: URL Rewrite for Azure Application Gateway for Containers - Gateway API +description: Learn how to rewrite URLs in Gateway API for Application Gateway for Containers. +++++ Last updated : 09/25/2023++++# URL Rewrite for Azure Application Gateway for Containers - Gateway API (preview) ++Application Gateway for Containers allows you to rewrite the URL of a client request, including the requests' hostname and/or path. When Application Gateway for Containers initiates the request to the backend target, the request contains the newly rewritten URL to initiate the request. +++## Usage details ++URL Rewrites take advantage of [filters](https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1beta1.HTTPURLRewriteFilter) as defined by Kubernetes Gateway API. ++## Background +URL rewrite enables you to translate an incoming request to a different URL when initiated to a backend target. ++See the following figure, which illustrates an example of a request destined for _contoso.com/shop_ being rewritten to _contoso.com/ecommerce_ when the request is initiated to the backend target by Application Gateway for Containers: ++[ ![A diagram showing the Application Gateway for Containers rewriting a URL to the backend.](./media/how-to-url-rewrite-gateway-api/url-rewrite.png) ](./media/how-to-url-rewrite-gateway-api/url-rewrite.png#lightbox) +++## Prerequisites ++> [!IMPORTANT] +> Application Gateway for Containers is currently in PREVIEW.<br> +> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. ++1. 
If following the BYO deployment strategy, ensure you have set up your Application Gateway for Containers resources and [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md) +2. If following the ALB managed deployment strategy, ensure you have provisioned your [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md) and provisioned the Application Gateway for Containers resources via the [ApplicationLoadBalancer custom resource](quickstart-create-application-gateway-for-containers-managed-by-alb-controller.md). +3. Deploy sample HTTP application + Apply the following deployment.yaml file on your cluster to create a sample web application to demonstrate path, query, and header based routing. + ```bash + kubectl apply -f https://trafficcontrollerdocs.blob.core.windows.net/examples/traffic-split-scenario/deployment.yaml + ``` + + This command creates the following on your cluster: + - a namespace called `test-infra` + - 2 services called `backend-v1` and `backend-v2` in the `test-infra` namespace + - 2 deployments called `backend-v1` and `backend-v2` in the `test-infra` namespace ++## Deploy the required Gateway API resources ++# [ALB managed deployment](#tab/alb-managed) ++1. Create a Gateway +```bash +kubectl apply -f - <<EOF +apiVersion: gateway.networking.k8s.io/v1beta1 +kind: Gateway +metadata: + name: gateway-01 + namespace: test-infra + annotations: + alb.networking.azure.io/alb-namespace: alb-test-infra + alb.networking.azure.io/alb-name: alb-test +spec: + gatewayClassName: azure-alb-external + listeners: + - name: http-listener + port: 80 + protocol: HTTP + allowedRoutes: + namespaces: + from: Same +EOF +``` +++# [Bring your own (BYO) deployment](#tab/byo) ++1. 
Set the following environment variables ++```bash +RESOURCE_GROUP='<resource group name of the Application Gateway For Containers resource>' +RESOURCE_NAME='alb-test' ++RESOURCE_ID=$(az network alb show --resource-group $RESOURCE_GROUP --name $RESOURCE_NAME --query id -o tsv) +FRONTEND_NAME='frontend' +``` ++2. Create a Gateway +```bash +kubectl apply -f - <<EOF +apiVersion: gateway.networking.k8s.io/v1beta1 +kind: Gateway +metadata: + name: gateway-01 + namespace: test-infra + annotations: + alb.networking.azure.io/alb-id: $RESOURCE_ID +spec: + gatewayClassName: azure-alb-external + listeners: + - name: http-listener + port: 80 + protocol: HTTP + allowedRoutes: + namespaces: + from: Same + addresses: + - type: alb.networking.azure.io/alb-frontend + value: $FRONTEND_NAME +EOF +``` ++++Once the gateway resource has been created, ensure the status is valid, the listener is _Programmed_, and an address is assigned to the gateway. +```bash +kubectl get gateway gateway-01 -n test-infra -o yaml +``` ++Example output of successful gateway creation. +```yaml +status: + addresses: + - type: IPAddress + value: xxxx.yyyy.alb.azure.com + conditions: + - lastTransitionTime: "2023-06-19T21:04:55Z" + message: Valid Gateway + observedGeneration: 1 + reason: Accepted + status: "True" + type: Accepted + - lastTransitionTime: "2023-06-19T21:04:55Z" + message: Application Gateway For Containers resource has been successfully updated. 
+ observedGeneration: 1
+ reason: Programmed
+ status: "True"
+ type: Programmed
+ listeners:
+ - attachedRoutes: 0
+ conditions:
+ - lastTransitionTime: "2023-06-19T21:04:55Z"
+ message: ""
+ observedGeneration: 1
+ reason: ResolvedRefs
+ status: "True"
+ type: ResolvedRefs
+ - lastTransitionTime: "2023-06-19T21:04:55Z"
+ message: Listener is accepted
+ observedGeneration: 1
+ reason: Accepted
+ status: "True"
+ type: Accepted
+ - lastTransitionTime: "2023-06-19T21:04:55Z"
+ message: Application Gateway For Containers resource has been successfully updated.
+ observedGeneration: 1
+ reason: Programmed
+ status: "True"
+ type: Programmed
+ name: http-listener
+ supportedKinds:
+ - group: gateway.networking.k8s.io
+ kind: HTTPRoute
+```
+
+Once the gateway has been created, create an HTTPRoute resource for `contoso.com`. This example ensures traffic sent to `contoso.com/shop` is rewritten to `contoso.com/ecommerce` before it reaches the backend target.
+
+```bash
+kubectl apply -f - <<EOF
+apiVersion: gateway.networking.k8s.io/v1beta1
+kind: HTTPRoute
+metadata:
+  name: rewrite-example
+  namespace: test-infra
+spec:
+  parentRefs:
+  - name: gateway-01
+  hostnames:
+  - "contoso.com"
+  rules:
+  - matches:
+    - path:
+        type: PathPrefix
+        value: /shop
+    filters:
+    - type: URLRewrite
+      urlRewrite:
+        path:
+          type: ReplacePrefixMatch
+          replacePrefixMatch: /ecommerce
+    backendRefs:
+    - name: backend-v1
+      port: 8080
+  - backendRefs:
+    - name: backend-v2
+      port: 8080
+EOF
+```
+
+Once the HTTPRoute resource has been created, ensure the HTTPRoute resource shows _Accepted_ and the Application Gateway for Containers resource has been _Programmed_.
+```bash
+kubectl get httproute rewrite-example -n test-infra -o yaml
+```
+
+Verify that the status of the Application Gateway for Containers resource has been successfully updated for each HTTPRoute.
+
+```yaml
+status:
+  parents:
+  - conditions:
+    - lastTransitionTime: "2023-06-19T22:18:23Z"
+      message: ""
+      observedGeneration: 1
+      reason: ResolvedRefs
+      status: "True"
+      type: ResolvedRefs
+    - lastTransitionTime: "2023-06-19T22:18:23Z"
+      message: Route is Accepted
+      observedGeneration: 1
+      reason: Accepted
+      status: "True"
+      type: Accepted
+    - lastTransitionTime: "2023-06-19T22:18:23Z"
+      message: Application Gateway For Containers resource has been successfully updated.
+      observedGeneration: 1
+      reason: Programmed
+      status: "True"
+      type: Programmed
+    controllerName: alb.networking.azure.io/alb-controller
+    parentRef:
+      group: gateway.networking.k8s.io
+      kind: Gateway
+      name: gateway-01
+      namespace: test-infra
+```
+
+## Test access to the application
+
+Now we're ready to send some traffic to our sample application via the FQDN assigned to the frontend. Use the following command to get the FQDN.
+
+```bash
+fqdn=$(kubectl get gateway gateway-01 -n test-infra -o jsonpath='{.status.addresses[0].value}')
+```
+
+By resolving `contoso.com` to the frontend's IP address with curl's `--resolve` option, a request to `contoso.com/shop` should return a response from the backend-v1 service, with the requested path forwarded to the backend target showing `contoso.com/ecommerce`.
+
+```bash
+fqdnIp=$(dig +short $fqdn)
+curl -k --resolve contoso.com:80:$fqdnIp http://contoso.com/shop
+```
+
+In the response, we should see:
+```json
+{
+  "path": "/ecommerce",
+  "host": "contoso.com",
+  "method": "GET",
+  "proto": "HTTP/1.1",
+  "headers": {
+    "Accept": [
+      "*/*"
+    ],
+    "User-Agent": [
+      "curl/7.81.0"
+    ],
+    "X-Forwarded-For": [
+      "xxx.xxx.xxx.xxx"
+    ],
+    "X-Forwarded-Proto": [
+      "http"
+    ],
+    "X-Request-Id": [
+      "dcd4bcad-ea43-4fb6-948e-a906380dcd6d"
+    ]
+  },
+  "namespace": "test-infra",
+  "ingress": "",
+  "service": "",
+  "pod": "backend-v1-5b8fd96959-f59mm"
+}
+```
+
+Using the same curl command with `contoso.com` (no `/shop` path), the request should return a response from the backend-v2 service.
+
+```bash
+fqdnIp=$(dig +short $fqdn)
+curl -k --resolve contoso.com:80:$fqdnIp http://contoso.com
+```
+
+In the response, we should see:
+```json
+{
+  "path": "/",
+  "host": "contoso.com",
+  "method": "GET",
+  "proto": "HTTP/1.1",
+  "headers": {
+    "Accept": [
+      "*/*"
+    ],
+    "User-Agent": [
+      "curl/7.81.0"
+    ],
+    "X-Forwarded-For": [
+      "xxx.xxx.xxx.xxx"
+    ],
+    "X-Forwarded-Proto": [
+      "http"
+    ],
+    "X-Request-Id": [
+      "adae8cc1-8030-4d95-9e05-237dd4e3941b"
+    ]
+  },
+  "namespace": "test-infra",
+  "ingress": "",
+  "service": "",
+  "pod": "backend-v2-594bd59865-ppv9w"
+}
+```
+
+Congratulations, you have installed ALB Controller, deployed a backend application, and used filtering to rewrite the client-requested URL before traffic is sent to the target on Application Gateway for Containers. |
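The echoed JSON shown above can also be verified from a script rather than by eye. The following is a minimal sketch that checks the rewritten path; the `response` value is an illustrative copy of the sample output (in a live run you would capture it from the curl command), and only the `path` field name is taken from the output above.

```shell
# Illustrative copy of the echo backend's response shown above
response='{"path": "/ecommerce", "host": "contoso.com", "pod": "backend-v1-5b8fd96959-f59mm"}'

# Extract the "path" field with sed (avoids a jq dependency)
path=$(printf '%s' "$response" | sed -n 's/.*"path": *"\([^"]*\)".*/\1/p')

# The rewrite worked if the backend saw /ecommerce rather than /shop
if [ "$path" = "/ecommerce" ]; then
  echo "rewrite OK"
else
  echo "rewrite FAILED: got $path"
fi
```

In a live run, you would first capture the response with `response=$(curl -k --resolve contoso.com:80:$fqdnIp http://contoso.com/shop)`.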
application-gateway | Quickstart Create Application Gateway For Containers Managed By Alb Controller | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/quickstart-create-application-gateway-for-containers-managed-by-alb-controller.md | |
application-gateway | Quickstart Deploy Application Gateway For Containers Alb Controller | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/quickstart-deploy-application-gateway-for-containers-alb-controller.md | You need to complete the following tasks prior to deploying Application Gateway > [!NOTE] > The AKS cluster needs to be in a [region where Application Gateway for Containers is available](overview.md#supported-regions) > AKS cluster should use [Azure CNI](../../aks/configure-azure-cni.md).- > AKS cluster should have the workload identity feature enabled. [Learn how](../../aks/workload-identity-deploy-cluster.md#update-an-existing-aks-cluster) to enable and use an existing AKS cluster section. + > AKS cluster should have the workload identity feature enabled. [Learn how](../../aks/workload-identity-deploy-cluster.md#update-an-existing-aks-cluster) to enable workload identity on an existing AKS cluster. If using an existing cluster, ensure you enable Workload Identity support on your AKS cluster. Workload identities can be enabled via the following: You need to complete the following tasks prior to deploying Application Gateway 2. Install ALB Controller using Helm ### For new deployments+ + To install ALB Controller, use the `helm install` command. ++ When the `helm install` command is run, it will deploy the helm chart to the _default_ namespace. When alb-controller is deployed, it will deploy to the _azure-alb-system_ namespace. Both of these namespaces may be overridden independently as desired. To override the namespace the helm chart is deployed to, you may specify the --namespace (or -n) parameter. To override the _azure-alb-system_ namespace used by alb-controller, you may set the albController.namespace property during installation (`--set albController.namespace`). 
If neither the `--namespace` nor the `--set albController.namespace` parameter is defined, the _default_ namespace will be used for the helm chart and the _azure-alb-system_ namespace will be used for the ALB controller components. Lastly, if the namespace for the helm chart resource is not yet defined, ensure the `--create-namespace` parameter is also specified along with the `--namespace` or `-n` parameters.
+
 ALB Controller can be installed by running the following commands:
 ```azurecli-interactive
 az aks get-credentials --resource-group $RESOURCE_GROUP --name $AKS_NAME
 helm install alb-controller oci://mcr.microsoft.com/application-lb/charts/alb-controller \
- --version 0.4.023971 \
+ --namespace <helm-resource-namespace> \
+ --version 0.5.024542 \
+ --set albController.namespace=<alb-controller-namespace> \
 --set albController.podIdentity.clientID=$(az identity show -g $RESOURCE_GROUP -n azure-alb-identity --query clientId -o tsv)
 ```
- > [!Note]
- > ALB Controller will automatically be provisioned into a namespace called azure-alb-system. The namespace name may be changed by defining the _--namespace <namespace_name>_ parameter when executing the helm command. During upgrade, please ensure you specify the --namespace parameter.
 ### For existing deployments
- ALB can be upgraded by running the following commands (ensure you add the `--namespace namespace_name` parameter to define the namespace if the previous installation did not use the namespace _azure-alb-system_):
+ ALB can be upgraded by running the following commands:
+
+ > [!Note]
+ > During upgrade, please ensure you specify the `--namespace` or `--set albController.namespace` parameters if the namespaces were overridden in the previous installation. To determine the previous namespaces used, you may run the `helm list` command for the helm namespace and `kubectl get pod -A -l app=alb-controller` for the ALB controller.
+ ```azurecli-interactive az aks get-credentials --resource-group $RESOURCE_GROUP --name $AKS_NAME helm upgrade alb-controller oci://mcr.microsoft.com/application-lb/charts/alb-controller \- --version 0.4.023971 \ - --set albController.podIdentity.clientID=$(az identity show -g $RESOURCE_GROUP -n azure-alb-identity --query clientId -o tsv) + --namespace <helm-resource-namespace> \ + --version 0.5.024542 \ + --set albController.podIdentity.clientID=$(az identity show -g $RESOURCE_GROUP -n azure-alb-identity --query clientId -o tsv) ``` ### Verify the ALB Controller installation You need to complete the following tasks prior to deploying Application Gateway | NAME | READY | STATUS | RESTARTS | AGE | | - | -- | - | -- | - | | alb-controller-bootstrap-6648c5d5c-hrmpc | 1/1 | Running | 0 | 4d6h |+ | alb-controller-6648c5d5c-sdd9t | 1/1 | Running | 0 | 4d6h | | alb-controller-6648c5d5c-au234 | 1/1 | Running | 0 | 4d6h | 2. Verify GatewayClass `azure-application-lb` is installed on your cluster: You need to complete the following tasks prior to deploying Application Gateway ```azurecli-interactive kubectl get gatewayclass azure-alb-external -o yaml ```+ You should see that the GatewayClass has a condition that reads **Valid GatewayClass** . This indicates that a default GatewayClass has been set up and that any gateway resources that reference this GatewayClass is managed by ALB Controller automatically.+ ```output + apiVersion: gateway.networking.k8s.io/v1beta1 + kind: GatewayClass + metadata: + creationTimestamp: "2023-07-31T13:07:00Z" + generation: 1 + name: azure-alb-external + resourceVersion: "64270" + uid: 6c1443af-63e6-4b79-952f-6c3af1f1c41e + spec: + controllerName: alb.networking.azure.io/alb-controller + status: + conditions: + - lastTransitionTime: "2023-07-31T13:07:23Z" + message: Valid GatewayClass + observedGeneration: 1 + reason: Accepted + status: "True" + type: Accepted + ``` ## Next Steps |
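The GatewayClass acceptance check above can be scripted rather than inspected by eye. The following sketch greps a saved copy of the status for the acceptance condition; `status_yaml` holds an illustrative excerpt of the output shown above, which on a live cluster you would capture with `kubectl get gatewayclass azure-alb-external -o yaml`.

```shell
# Illustrative excerpt of the GatewayClass status shown above
status_yaml='conditions:
- lastTransitionTime: "2023-07-31T13:07:23Z"
  message: Valid GatewayClass
  reason: Accepted
  status: "True"
  type: Accepted'

# A GatewayClass managed by ALB Controller reports an Accepted condition
if printf '%s\n' "$status_yaml" | grep -q 'message: Valid GatewayClass'; then
  echo "GatewayClass accepted"
fi
```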
application-gateway | Troubleshooting Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/troubleshooting-guide.md | -Learn how to troubleshoot common problems in Application Gateway for Containers. +This article provides some guidance to help you troubleshoot common problems in Application Gateway for Containers. ++## Find the version of ALB Controller ++Before you start troubleshooting, determine the version of ALB Controller that is deployed. You can determine which version of ALB Controller is running by using the following _kubectl_ command (ensure you substitute your namespace if not using the default namespace of `azure-alb-system`): ++```bash +kubectl get deployment -n azure-alb-system -o wide +``` +Example output: ++| NAME | READY | UP-TO-DATE | AVAILABLE | AGE | CONTAINERS | IMAGES | SELECTOR | +| | -- | - | | - | -- | - | -- | +| alb-controller | 2/2 | 2 | 2 | 18d | alb-controller | mcr.microsoft.com/application-lb/images/alb-controller:**0.5.024542** | app=alb-controller | +| alb-controller-bootstrap | 1/1 | 1 | 1 | 18d | alb-controller-bootstrap | mcr.microsoft.com/application-lb/images/alb-controller-bootstrap:**0.5.024542** | app=alb-controller-bootstrap | ++In this example, the ALB controller version is **0.5.024542**. ++The ALB Controller version can be upgraded by running the `helm upgrade alb-controller` command. For more information, see [Install the ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md#install-the-alb-controller). ++> [!Tip] +> The latest ALB Controller version can be found in the [ALB Controller release notes](alb-controller-release-notes.md#latest-release-recommended). ## Collect ALB Controller logs-Logs may be collected from the ALB Controller by using the _kubectl logs_ command referencing the ALB Controller pod. +Logs can be collected from the ALB Controller by using the _kubectl logs_ command referencing the ALB Controller pod. 1. 
Get the running ALB Controller pod name
- Execute the following kubectl command:
+ Run the following kubectl command. Ensure you substitute your namespace if not using the default namespace of `azure-alb-system`:

 ```bash
 kubectl get pods -n azure-alb-system
 ```

- You should see the following (pod names may differ slightly from the following table):
+ You should see output similar to the following example. Pod names might differ slightly.

 | NAME | READY | STATUS | RESTARTS | AGE |
 | - | -- | - | -- | - |
- | alb-controller-bootstrap-6648c5d5c-hrmpc | 1/1 | Running | 0 | 4d6h |
+ | alb-controller-6648c5d5c-sdd9t | 1/1 | Running | 0 | 4d6h |
 | alb-controller-6648c5d5c-au234 | 1/1 | Running | 0 | 4d6h |
+ | alb-controller-bootstrap-6648c5d5c-hrmpc | 1/1 | Running | 0 | 4d6h |
+
+ ALB Controller uses leader election, provided by the controller-runtime manager, to determine an active pod and a standby pod for high availability.
+
+ Copy the name of each alb-controller pod (not the bootstrap pod; in this case, `alb-controller-6648c5d5c-sdd9t` and `alb-controller-6648c5d5c-au234`) and run the following command to determine the active pod.

+ # [Linux](#tab/active-pod-linux)
+ ```bash
+ kubectl logs alb-controller-6648c5d5c-sdd9t -n azure-alb-system -c alb-controller | grep "successfully acquired lease"
+ ```

+ # [Windows](#tab/active-pod-windows)
+ ```cli
+ kubectl logs alb-controller-6648c5d5c-sdd9t -n azure-alb-system -c alb-controller | findstr "successfully acquired lease"
+ ```

- Copy the name of the alb-controller pod (not the bootstrap pod, in this case, alb-controller-6648c5d5c-au234).
+ If the pod is the active (primary) pod, you should see the following: `successfully acquired lease azure-alb-system/alb-controller-leader-election`

2. Collect the logs
 Logs from ALB Controller will be returned in JSON format.
Execute the following kubectl command, replacing the name with the pod name returned in step 1: ```bash- kubectl logs -n azure-alb-system alb-controller-6648c5d5c-au234 + kubectl logs -n azure-alb-system alb-controller-6648c5d5c-sdd9t ``` Similarly, you can redirect the output of the existing command to a file by specifying the greater than (>) sign and the filename to write the logs to: ```bash- kubectl logs -n azure-alb-system alb-controller-6648c5d5c-au234 > alb-controller-logs.json + kubectl logs -n azure-alb-system alb-controller-6648c5d5c-sdd9t > alb-controller-logs.json ``` ## Configuration errors |
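Because the logs are emitted as JSON, simple text tools can narrow them down after collection. The following sketch filters error-level entries from the saved file; the `level` field name is an assumption based on typical controller-runtime log output, not something this article confirms, and the sample lines are stand-ins for a real `kubectl logs` capture.

```shell
# Create a small stand-in log file (in practice this is the file written by kubectl logs)
printf '%s\n' \
  '{"level":"info","msg":"reconcile succeeded"}' \
  '{"level":"error","msg":"failed to update configuration"}' \
  > alb-controller-logs.json

# Keep only error-level entries
grep '"level":"error"' alb-controller-logs.json
```

If `jq` is available, `jq 'select(.level == "error")' alb-controller-logs.json` gives the same result with proper JSON parsing.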
application-gateway | Quick Create Terraform | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/quick-create-terraform.md | + + Title: 'Quickstart: Direct web traffic using Terraform' ++description: In this quickstart, you learn how to use Terraform to create an Azure Application Gateway that directs web traffic to virtual machines in a backend pool. +++ Last updated : 09/26/2023++++content_well_notification: + - AI-contribution +++# Quickstart: Direct web traffic with Azure Application Gateway - Terraform ++In this quickstart, you use Terraform to create an Azure Application Gateway. Then you test the application gateway to make sure it works correctly. +++> [!div class="checklist"] +> * Create an Azure resource group using [azurerm_resource_group](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/resource_group) +> * Create an Azure Virtual Network using [azurerm_virtual_network](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/virtual_network) +> * Create an Azure subnet using [azurerm_subnet](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/subnet) +> * Create an Azure public IP using [azurerm_public_ip](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/public_ip) +> * Create an Azure Application Gateway using [azurerm_application_gateway](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/application_gateway) +> * Create an Azure network interface using [azurerm_network_interface](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/network_interface) +> * Create an Azure network interface application gateway backend address pool association using 
[azurerm_network_interface_application_gateway_backend_address_pool_association](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/network_interface_application_gateway_backend_address_pool_association) +> * Create an Azure Windows Virtual Machine using [azurerm_windows_virtual_machine](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/windows_virtual_machine) +> * Create an Azure Virtual Machine Extension using [azurerm_virtual_machine_extension](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/virtual_machine_extension) ++## Prerequisites ++- [Install and configure Terraform](/azure/developer/terraform/quickstart-configure) ++## Implement the Terraform code ++> [!NOTE] +> The sample code for this article is located in the [Azure Terraform GitHub repo](https://github.com/Azure/terraform/tree/master/quickstart/101-application-gateway). You can view the log file containing the [test results from current and previous versions of Terraform](https://github.com/Azure/terraform/tree/master/quickstart/101-application-gateway/TestRecord.md). +> +> See more [articles and sample code showing how to use Terraform to manage Azure resources](/azure/terraform) ++1. Create a directory in which to test the sample Terraform code and make it the current directory. ++1. Create a file named `providers.tf` and insert the following code: ++ :::code language="Terraform" source="~/terraform_samples/quickstart/101-application-gateway/providers.tf"::: ++1. Create a file named `main.tf` and insert the following code: ++ :::code language="Terraform" source="~/terraform_samples/quickstart/101-application-gateway/main.tf"::: ++1. Create a file named `variables.tf` and insert the following code: ++ :::code language="Terraform" source="~/terraform_samples/quickstart/101-application-gateway/variables.tf"::: ++1. 
Create a file named `outputs.tf` and insert the following code: ++ :::code language="Terraform" source="~/terraform_samples/quickstart/101-application-gateway/outputs.tf"::: ++## Initialize Terraform +++## Create a Terraform execution plan +++## Apply a Terraform execution plan +++## Verify the results ++1. When you apply the execution plan, Terraform displays the frontend public IP address. If you've cleared the screen, you can retrieve that value with the following Terraform command: ++ ```console + echo $(terraform output -raw gateway_frontend_ip) + ``` ++1. Paste the public IP address into the address bar of your web browser. Refresh the browser to see the name of the virtual machine. A valid response verifies the application gateway is successfully created and can connect with the backend. ++## Clean up resources +++## Troubleshoot Terraform on Azure ++[Troubleshoot common problems when using Terraform on Azure](/azure/developer/terraform/troubleshoot) ++## Next steps ++> [!div class="nextstepaction"] +> [Learn more about using Application Gateway](/azure/application-gateway/overview) |
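The manual browser check above can be preceded by a quick sanity check on the Terraform output value. A sketch that validates the value looks like an IPv4 address before you use it; the sample value is a placeholder from the documentation address range, and in practice you would assign `ip=$(terraform output -raw gateway_frontend_ip)`.

```shell
# Placeholder value; replace with: ip=$(terraform output -raw gateway_frontend_ip)
ip="203.0.113.10"

# Basic IPv4 shape check before pasting the address into a browser or script
if printf '%s' "$ip" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$'; then
  echo "frontend IP: $ip"
else
  echo "unexpected output: $ip" >&2
fi
```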
azure-app-configuration | Reference Kubernetes Provider | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/reference-kubernetes-provider.md | An `AzureAppConfigurationProvider` resource has the following top-level child pr |Name|Description|Required|Type| |||||-|endpoint|The endpoint of Azure App Configuration, which you would like to retrieve the key-values from|alternative|string| -|connectionStringReference|The name of the Kubernetes Secret that contains Azure App Configuration connection string|alternative|string| -|target|The destination of the retrieved key-values in Kubernetes|true|object| -|auth|The authentication method to access Azure App Configuration|false|object| -|keyValues|The settings for querying and processing key-values|false|object| +|endpoint|The endpoint of Azure App Configuration, which you would like to retrieve the key-values from.|alternative|string| +|connectionStringReference|The name of the Kubernetes Secret that contains Azure App Configuration connection string.|alternative|string| +|target|The destination of the retrieved key-values in Kubernetes.|true|object| +|auth|The authentication method to access Azure App Configuration.|false|object| +|keyValues|The settings for querying and processing key-values.|false|object| The `spec.target` property has the following child property. 
|Name|Description|Required|Type| |||||-|configMapName|The name of the ConfigMap to be created|true|string| -|configMapData|The setting that specifies how the retrieved data should be populated in the generated ConfigMap|false|object| +|configMapName|The name of the ConfigMap to be created.|true|string| +|configMapData|The setting that specifies how the retrieved data should be populated in the generated ConfigMap.|false|object| If the `spec.target.configMapData` property is not set, the generated ConfigMap will be populated with the list of key-values retrieved from Azure App Configuration, which allows the ConfigMap to be consumed as environment variables. Update this property if you wish to consume the ConfigMap as a mounted file. This property has the following child properties. |Name|Description|Required|Type| |||||-|type|The setting that indicates how the retrieved data is constructed in the generated ConfigMap. The allowed values include `default`, `json`, `yaml` and `properties`|optional|string| -|key|The key name of the retrieved data when the `type` is set to `json`, `yaml` or `properties`. Set it to the file name if the ConfigMap is set up to be consumed as a mounted file|conditional|string| +|type|The setting that indicates how the retrieved data is constructed in the generated ConfigMap. The allowed values include `default`, `json`, `yaml` and `properties`.|optional|string| +|key|The key name of the retrieved data when the `type` is set to `json`, `yaml` or `properties`. Set it to the file name if the ConfigMap is set up to be consumed as a mounted file.|conditional|string| The `spec.auth` property isn't required if the connection string of your App Configuration store is provided by setting the `spec.connectionStringReference` property. Otherwise, one of the identities, service principal, workload identity, or managed identity, will be used for authentication. The `spec.auth` has the following child properties. Only one of them should be specified. 
If none of them are set, the system-assigned managed identity of the virtual machine scale set will be used. |Name|Description|Required|Type| |||||-|servicePrincipalReference|The name of the Kubernetes Secret that contains the credentials of a service principal|false|string| -|workloadIdentity|The settings for using workload identity|false|object| -|managedIdentityClientId|The Client ID of user-assigned managed identity of virtual machine scale set|false|string| +|servicePrincipalReference|The name of the Kubernetes Secret that contains the credentials of a service principal.|false|string| +|workloadIdentity|The settings for using workload identity.|false|object| +|managedIdentityClientId|The Client ID of user-assigned managed identity of virtual machine scale set.|false|string| The `spec.auth.workloadIdentity` property has the following child property. |Name|Description|Required|Type| |||||-|managedIdentityClientId|The Client ID of the user-assigned managed identity associated with the workload identity|true|string| +|managedIdentityClientId|The Client ID of the user-assigned managed identity associated with the workload identity.|true|string| The `spec.keyValues` has the following child properties. The `spec.keyValues.keyVaults` property is required if any Key Vault references are expected to be downloaded. |Name|Description|Required|Type| |||||-|selectors|The list of selectors for key-value filtering|false|object array| -|trimKeyPrefixes|The list of key prefixes to be trimmed|false|string array| -|keyVaults|The settings for Key Vault references|conditional|object| -|refresh|The settings for refreshing the key-values in ConfigMap or Secret|false|object| +|selectors|The list of selectors for key-value filtering.|false|object array| +|trimKeyPrefixes|The list of key prefixes to be trimmed.|false|string array| +|refresh|The settings for refreshing data from Azure App Configuration. 
If the property is absent, data from Azure App Configuration will not be refreshed.|false|object| +|keyVaults|The settings for Key Vault references.|conditional|object| If the `spec.keyValues.selectors` property isn't set, all key-values with no label will be downloaded. It contains an array of *selector* objects, which have the following child properties. |Name|Description|Required|Type| |||||-|keyFilter|The key filter for querying key-values|true|string| -|labelFilter|The label filter for querying key-values|false|string| +|keyFilter|The key filter for querying key-values.|true|string| +|labelFilter|The label filter for querying key-values.|false|string| -The `spec.keyValues.keyVaults` property has the following child properties. +The `spec.keyValues.refresh` property has the following child properties. |Name|Description|Required|Type| |||||-|target|The destination of resolved Key Vault references in Kubernetes|true|object| -|auth|The authentication method to access Key Vaults|false|object| +|monitoring|The key-values monitored for change detection, aka sentinel keys. The data from Azure App Configuration will be refreshed only if at least one of the monitored key-values is changed.|true|object| +|interval|The interval at which the data will be refreshed from Azure App Configuration. It must be greater than or equal to 1 second. If the property is absent, a default value of 30 seconds will be used.|false|duration string| -The `spec.keyValues.keyVaults.target` property has the following child property. +The `spec.keyValues.refresh.monitoring.keyValues` is an array of objects, which have the following child properties. |Name|Description|Required|Type| |||||-|secretName|The name of the Kubernetes Secret to be created|true|string| +|key|The key of a key-value.|true|string| +|label|The label of a key-value.|false|string| -If the `spec.keyValues.keyVaults.auth` property isn't set, the system-assigned managed identity is used. It has the following child properties. 
+The `spec.keyValues.keyVaults` property has the following child properties. |Name|Description|Required|Type| |||||-|servicePrincipalReference|The name of the Kubernetes Secret that contains the credentials of a service principal used for authentication with vaults that don't have individual authentication methods specified|false|string| -|workloadIdentity|The settings of the workload identity used for authentication with vaults that don't have individual authentication methods specified. It has the same child properties as `spec.auth.workloadIdentity`|false|object| -|managedIdentityClientId|The client ID of a user-assigned managed identity of virtual machine scale set used for authentication with vaults that don't have individual authentication methods specified|false|string| -|vaults|The authentication methods for individual vaults|false|object array| +|target|The destination of the retrieved secrets in Kubernetes.|true|object| +|auth|The authentication method to access Key Vaults.|false|object| +|refresh|The settings for refreshing data from Key Vaults. If the property is absent, data from Key Vaults will not be refreshed unless the corresponding Key Vault references are reloaded.|false|object| -The authentication method of each *vault* can be specified with the following properties. One of `managedIdentityClientId`, `servicePrincipalReference` or `workloadIdentity` must be provided. +The `spec.keyValues.keyVaults.target` property has the following child property. |Name|Description|Required|Type| |||||-|uri|The URI of a vault|true|string| -|servicePrincipalReference|The name of the Kubernetes Secret that contains the credentials of a service principal used for authentication with a vault|false|string| -|workloadIdentity|The settings of the workload identity used for authentication with a vault. 
It has the same child properties as `spec.auth.workloadIdentity`|false|object| -|managedIdentityClientId|The client ID of a user-assigned managed identity of virtual machine scale set used for authentication with a vault|false|string| +|secretName|The name of the Kubernetes Secret to be created.|true|string| -The `spec.keyValues.refresh` property has the following child properties. +If the `spec.keyValues.keyVaults.auth` property isn't set, the system-assigned managed identity is used. It has the following child properties. |Name|Description|Required|Type| |||||-|monitoring|The key-values that are monitored by the provider, provider automatically refreshes the ConfigMap or Secret if value change in any designated key-value|true|object| -|interval|The interval for refreshing, default value is 30 seconds, must be greater than 1 second|false|duration string| +|servicePrincipalReference|The name of the Kubernetes Secret that contains the credentials of a service principal used for authentication with vaults that don't have individual authentication methods specified.|false|string| +|workloadIdentity|The settings of the workload identity used for authentication with vaults that don't have individual authentication methods specified. It has the same child properties as `spec.auth.workloadIdentity`.|false|object| +|managedIdentityClientId|The client ID of a user-assigned managed identity of virtual machine scale set used for authentication with vaults that don't have individual authentication methods specified.|false|string| +|vaults|The authentication methods for individual vaults.|false|object array| -The `spec.keyValues.refresh.monitoring.keyValues` is an array of objects, which have the following child properties. +The authentication method of each *vault* can be specified with the following properties. One of `managedIdentityClientId`, `servicePrincipalReference` or `workloadIdentity` must be provided. 
++|Name|Description|Required|Type| +||||| +|uri|The URI of a vault.|true|string| +|servicePrincipalReference|The name of the Kubernetes Secret that contains the credentials of a service principal used for authentication with a vault.|false|string| +|workloadIdentity|The settings of the workload identity used for authentication with a vault. It has the same child properties as `spec.auth.workloadIdentity`.|false|object| +|managedIdentityClientId|The client ID of a user-assigned managed identity of virtual machine scale set used for authentication with a vault.|false|string| ++The `spec.keyValues.keyVaults.refresh` property has the following child property. |Name|Description|Required|Type| |||||-|key|The key of a key-value|true|string| -|label|The label of a key-value|false|string| +|interval|The interval at which the data will be refreshed from Key Vault. It must be greater than or equal to 1 minute. The Key Vault refresh is independent of the App Configuration refresh configured via `spec.keyValues.refresh`.|true|duration string| ## Examples spec: trimKeyPrefixes: [prefix1, prefix2] ``` +### Configuration refresh ++When you make changes to your data in Azure App Configuration, you might want those changes to be refreshed automatically in your Kubernetes cluster. It's common to update multiple key-values, but you don't want the cluster to pick up a change midway through the update. To maintain configuration consistency, you can use a key-value to signal the completion of your update. This key-value is known as the sentinel key. The Kubernetes provider can monitor this key-value, and the ConfigMap and Secret will only be regenerated with updated data once a change is detected in the sentinel key. ++In the following sample, a key-value named `app1_sentinel` is polled every minute, and the configuration is refreshed whenever changes are detected in the sentinel key. 
++``` yaml +apiVersion: azconfig.io/v1beta1 +kind: AzureAppConfigurationProvider +metadata: + name: appconfigurationprovider-sample +spec: + endpoint: <your-app-configuration-store-endpoint> + target: + configMapName: configmap-created-by-appconfig-provider + keyValues: + selectors: + - keyFilter: app1* + labelFilter: common + refresh: + interval: 1m + monitoring: + keyValues: + - key: app1_sentinel + label: common +``` + ### Key Vault references -The following sample instructs using a service principal to authenticate with a specific vault and a user-assigned managed identity for all other vaults. +In the following sample, one Key Vault is authenticated with a service principal, while all other Key Vaults are authenticated with a user-assigned managed identity. ``` yaml apiVersion: azconfig.io/v1beta1 spec: servicePrincipalReference: <name-of-secret-containing-service-principal-credentials> ``` -### Dynamically refresh ConfigMap and Secret +### Refresh of secrets from Key Vault -Setting the `spec.keyValues.refresh` property enables dynamic configuration data refresh in ConfigMap and Secret by monitoring designated key-values. The provider periodically polls the key-values, if there is any value change, provider triggers ConfigMap and Secret refresh in accordance with the present data in Azure App Configuration. +Refreshing secrets from Key Vaults usually requires reloading the corresponding Key Vault references from Azure App Configuration. However, with the `spec.keyValues.keyVaults.refresh` property, you can refresh the secrets from Key Vault independently. This is especially useful for ensuring that your workload automatically picks up any updated secrets from Key Vault during secret rotation. Note that to load the latest version of a secret, the Key Vault reference must not be a versioned secret. -The following sample instructs monitoring two key-values with 1 minute polling interval. 
+The following sample refreshes all non-versioned secrets from Key Vault every hour. ``` yaml apiVersion: azconfig.io/v1beta1 spec: selectors: - keyFilter: app1* labelFilter: common- - keyFilter: app1* - labelFilter: development - refresh: - interval: 1m - monitoring: - keyValues: - - key: sentinelKey - label: common - - key: sentinelKey - label: development + keyVaults: + target: + secretName: secret-created-by-appconfig-provider + auth: + managedIdentityClientId: <your-user-assigned-managed-identity-client-id> + refresh: + interval: 1h ``` -### Consume ConfigMap +### ConfigMap Consumption Applications running in Kubernetes typically consume the ConfigMap either as environment variables or as configuration files. If the `configMapData.type` property is absent or is set to default, the ConfigMap is populated with the itemized list of data retrieved from Azure App Configuration, which can be easily consumed as environment variables. If the `configMapData.type` property is set to json, yaml or properties, data retrieved from Azure App Configuration is grouped into one item with key name specified by the `configMapData.key` property in the generated ConfigMap, which can be consumed as a mounted file. |
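The ConfigMap consumption paragraph above can be sketched as a pod spec. This is a minimal illustration, not taken from the article: the ConfigMap name matches the earlier samples, the pod and image names are placeholders, and in practice you would use either the environment-variable style (default, itemized ConfigMap) or the file-mount style (`configMapData.type` set to json, yaml, or properties), not necessarily both at once.

``` yaml
apiVersion: v1
kind: Pod
metadata:
  name: app1
spec:
  containers:
  - name: app1
    image: <your-application-image>
    # Itemized ConfigMap (default type): inject every key-value as an environment variable.
    envFrom:
    - configMapRef:
        name: configmap-created-by-appconfig-provider
    # Grouped ConfigMap (type json/yaml/properties): mount the single data item as a file.
    volumeMounts:
    - name: config
      mountPath: /app/config
  volumes:
  - name: config
    configMap:
      name: configmap-created-by-appconfig-provider
```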
azure-arc | Extensions Release | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/extensions-release.md | Title: "Available extensions for Azure Arc-enabled Kubernetes clusters" Previously updated : 08/31/2023 Last updated : 09/26/2023 description: "See which extensions are currently available for Azure Arc-enabled Kubernetes clusters and view release notes." The currently supported versions of the `microsoft.flux` extension are described > [!IMPORTANT] > Eventually, a major version update (v2.x.x) for the `microsoft.flux` extension will be released. When this happens, clusters won't be auto-upgraded to this version, since [auto-upgrade is only supported for minor version releases](extensions.md#upgrade-extension-instance). If you're still using an older API version when the next major version is released, you'll need to update your manifests to the latest API versions, perform any necessary testing, then upgrade your extension manually. For more information about the new API versions (breaking changes) and how to update your manifests, see the [Flux v2 release notes](https://github.com/fluxcd/flux2/releases/tag/v2.0.0). +### 1.7.7 (September 2023) ++> [!NOTE] +> We have started to roll out this release across regions. We'll remove this note once version 1.7.7 is available to all supported regions. ++Flux version: [Release v2.0.1](https://github.com/fluxcd/flux2/releases/tag/v2.0.1) ++- source-controller: v1.0.1 +- kustomize-controller: v1.0.1 +- helm-controller: v0.35.0 +- notification-controller: v1.0.0 +- image-automation-controller: v0.35.0 +- image-reflector-controller: v0.29.1 ++Changes made for this version: ++- Updated SSH key entry to use the [updated RSA SSH host key](https://bitbucket.org/blog/ssh-host-key-changes) to prevent failures in configurations with `ssh` authentication type for Bitbucket. + ### 1.7.6 (August 2023) Flux version: [Release v2.0.1](https://github.com/fluxcd/flux2/releases/tag/v2.0.1) |
azure-arc | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/overview.md | Title: Azure Arc resource bridge (preview) overview description: Learn how to use Azure Arc resource bridge (preview) to support VM self-servicing on Azure Stack HCI, VMware, and System Center Virtual Machine Manager. Last updated 02/15/2023 -+ # What is Azure Arc resource bridge (preview)? In order to use Arc resource bridge in a region, Arc resource bridge and the arc Arc resource bridge supports the following Azure regions: * East US-* East US2 -* West US2 -* West US3 +* East US 2 +* West US 2 +* West US 3 +* Central US + * South Central US * West Europe * North Europe * UK South+* Sweden Central + * Canada Central * Australia East * Southeast Asia Arc resource bridge communicates outbound securely to Azure Arc over TCP port 44 * Learn more about [how Azure Arc-enabled VMware vSphere extends Azure's governance and management capabilities to VMware vSphere infrastructure](../vmware-vsphere/overview.md). * Learn more about [provisioning and managing on-premises Windows and Linux VMs running on Azure Stack HCI clusters](/azure-stack/hci/manage/azure-arc-enabled-virtual-machines). * Review the [system requirements](system-requirements.md) for deploying and managing Arc resource bridge.++ |
azure-arc | Prerequisites | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prerequisites.md | Title: Connected Machine agent prerequisites description: Learn about the prerequisites for installing the Connected Machine agent for Azure Arc-enabled servers. Previously updated : 07/11/2023 Last updated : 09/26/2023 If two agents use the same configuration, you will encounter inconsistent behavi Azure Arc supports the following Windows and Linux operating systems. Only x86-64 (64-bit) architectures are supported. The Azure Connected Machine agent does not run on x86 (32-bit) or ARM-based architectures. -* Windows Server 2008 R2 SP1, 2012, 2012 R2, 2016, 2019, and 2022 - * Both Desktop and Server Core experiences are supported - * Azure Editions are supported on Azure Stack HCI -* Windows 10, 11 (see [client operating system guidance](#client-operating-system-guidance)) -* Windows IoT Enterprise -* Azure Stack HCI +* Amazon Linux 2 and 2023 * Azure Linux (CBL-Mariner) 1.0, 2.0-* Ubuntu 16.04, 18.04, 20.04, and 22.04 LTS -* Debian 10, 11, and 12 +* Azure Stack HCI * CentOS Linux 7 and 8+* Debian 10, 11, and 12 +* Oracle Linux 7 and 8 +* Red Hat Enterprise Linux (RHEL) 7, 8 and 9 * Rocky Linux 8 * SUSE Linux Enterprise Server (SLES) 12 SP3-SP5 and 15-* Red Hat Enterprise Linux (RHEL) 7, 8 and 9 -* Amazon Linux 2 and 2023 -* Oracle Linux 7 and 8 +* Ubuntu 16.04, 18.04, 20.04, and 22.04 LTS +* Windows 10, 11 (see [client operating system guidance](#client-operating-system-guidance)) +* Windows IoT Enterprise +* Windows Server 2008 R2 SP1, 2012, 2012 R2, 2016, 2019, and 2022 + * Both Desktop and Server Core experiences are supported + * Azure Editions are supported on Azure Stack HCI ++The Azure Connected Machine agent can't currently be installed on systems hardened by the Center for Internet Security (CIS) Benchmark. 
### Client operating system guidance Microsoft doesn't recommend running Azure Arc on short-lived (ephemeral) servers Windows operating systems: -* NET Framework 4.6 or later. [Download the .NET Framework](/dotnet/framework/install/guide-for-developers). -* Windows PowerShell 4.0 or later (already included with Windows Server 2012 R2 and later). For Windows Server 2008 R2 SP1, [Download Windows Management Framework 5.1.](https://www.microsoft.com/download/details.aspx?id=54616). +* Windows Server 2008 R2 SP1 requires PowerShell 4.0 or later. Microsoft recommends running the latest version, [Windows Management Framework 5.1](https://www.microsoft.com/download/details.aspx?id=54616). Linux operating systems: * systemd * wget (to download the installation script) * openssl-* gnupg +* gnupg (Debian-based systems only) ## Required permissions |
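The Linux software requirements listed above (systemd, wget, openssl, plus gnupg on Debian-based systems) can be checked before installing the agent. The following is a hypothetical pre-flight sketch, not part of the official installer; `check_tool` is a helper name invented here.

``` shell
# Report whether each required tool is on PATH.
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: found"
  else
    echo "$1: MISSING"
  fi
}

# systemctl stands in for the systemd requirement; add gpg on Debian-based systems.
for tool in systemctl wget openssl; do
  check_tool "$tool"
done
```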
azure-functions | Create First Function Cli Node | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-node.md | Before you begin, you must have the following prerequisites: + The Azure [Az PowerShell module](/powershell/azure/install-azure-powershell) version 5.9.0 or later. ::: zone pivot="nodejs-model-v3" -+ [Node.js](https://nodejs.org/) version 20 (preview), 18 or 16. ++ [Node.js](https://nodejs.org/) version 14 or above. ::: zone-end ::: zone pivot="nodejs-model-v4" Before you begin, you must have the following prerequisites: [!INCLUDE [functions-install-core-tools](../../includes/functions-install-core-tools.md)] ::: zone pivot="nodejs-model-v4" -+ Make sure you install version v4.0.5095 of the Core Tools, or a later version. ++ Make sure you install version v4.0.5382 of the Core Tools, or a later version. ::: zone-end ## Create a local function project Each binding requires a direction, a type, and a unique name. The HTTP trigger h { "Values": { "AzureWebJobsStorage": "<Azure Storage connection information>",- "FUNCTIONS_WORKER_RUNTIME": "node", - "AzureWebJobsFeatureFlags": "EnableWorkerIndexing" + "FUNCTIONS_WORKER_RUNTIME": "node" } } ``` Each binding requires a direction, a type, and a unique name. The HTTP trigger h This command creates a function app running in your specified language runtime under the [Azure Functions Consumption Plan](consumption-plan.md), which is free for the amount of usage you incur here. The command also creates an associated Azure Application Insights instance in the same resource group, with which you can monitor your function app and view logs. For more information, see [Monitor Azure Functions](functions-monitoring.md). The instance incurs no costs until you activate it. -## Update app settings --To enable your V4 programming model app to run in Azure, you need to add a new application setting named `AzureWebJobsFeatureFlags` with a value of `EnableWorkerIndexing`. 
This setting is already in your local.settings.json file. --Run the following command to add this setting to your new function app in Azure. Replace `<FUNCTION_APP_NAME>` and `<RESOURCE_GROUP_NAME>` with the name of your function app and resource group, respectively. --# [Azure CLI](#tab/azure-cli) --```azurecli -az functionapp config appsettings set --name <FUNCTION_APP_NAME> --resource-group <RESOURCE_GROUP_NAME> --settings AzureWebJobsFeatureFlags=EnableWorkerIndexing -``` --# [Azure PowerShell](#tab/azure-powershell) --```azurepowershell -Update-AzFunctionAppSetting -Name <FUNCTION_APP_NAME> -ResourceGroupName <RESOURCE_GROUP_NAME> -AppSetting @{"AzureWebJobsFeatureFlags" = "EnableWorkerIndexing"} -``` --- [!INCLUDE [functions-publish-project-cli](../../includes/functions-publish-project-cli.md)] [!INCLUDE [functions-run-remote-azure-cli](../../includes/functions-run-remote-azure-cli.md)] |
azure-functions | Create First Function Cli Typescript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-typescript.md | Before you begin, you must have the following prerequisites: + The Azure [Az PowerShell module](/powershell/azure/install-azure-powershell) version 5.9.0 or later. ::: zone pivot="nodejs-model-v3" -+ [Node.js](https://nodejs.org/) version 18 or 16. ++ [Node.js](https://nodejs.org/) version 14 or above. ::: zone-end ::: zone pivot="nodejs-model-v4" + [Node.js](https://nodejs.org/) version 18 or above. Before you begin, you must have the following prerequisites: [!INCLUDE [functions-install-core-tools](../../includes/functions-install-core-tools.md)] ::: zone pivot="nodejs-model-v4" -+ Make sure you install version v4.0.5095 of the Core Tools, or a later version. ++ Make sure you install version v4.0.5382 of the Core Tools, or a later version. ::: zone-end ## Create a local function project Each binding requires a direction, a type, and a unique name. The HTTP trigger h { "Values": { "AzureWebJobsStorage": "<Azure Storage connection information>",- "FUNCTIONS_WORKER_RUNTIME": "node", - "AzureWebJobsFeatureFlags": "EnableWorkerIndexing" + "FUNCTIONS_WORKER_RUNTIME": "node" } } ``` Each binding requires a direction, a type, and a unique name. The HTTP trigger h This command creates a function app running in your specified language runtime under the [Azure Functions Consumption Plan](consumption-plan.md), which is free for the amount of usage you incur here. The command also creates an associated Azure Application Insights instance in the same resource group, with which you can monitor your function app and view logs. For more information, see [Monitor Azure Functions](functions-monitoring.md). The instance incurs no costs until you activate it. 
-## Update app settings --To enable your V4 programming model app to run in Azure, you need to add a new application setting named `AzureWebJobsFeatureFlags` with a value of `EnableWorkerIndexing`. This setting is already in your local.settings.json file. --Run the following command to add this setting to your new function app in Azure. Replace `<FUNCTION_APP_NAME>` and `<RESOURCE_GROUP_NAME>` with the name of your function app and resource group, respectively. --# [Azure CLI](#tab/azure-cli) --```azurecli -az functionapp config appsettings set --name <FUNCTION_APP_NAME> --resource-group <RESOURCE_GROUP_NAME> --settings AzureWebJobsFeatureFlags=EnableWorkerIndexing -``` --# [Azure PowerShell](#tab/azure-powershell) --```azurepowershell -Update-AzFunctionAppSetting -Name <FUNCTION_APP_NAME> -ResourceGroupName <RESOURCE_GROUP_NAME> -AppSetting @{"AzureWebJobsFeatureFlags" = "EnableWorkerIndexing"} -``` --- ## Deploy the function project to Azure Before you use Core Tools to deploy your project to Azure, you create a production-ready build of JavaScript files from the TypeScript source files. |
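Several of the quickstart changes above raise the minimum Core Tools requirement to v4.0.5382. As an aside, a dotted version string such as the output of `func --version` can be compared numerically rather than lexically (lexically, "4.0.5382" would sort before "4.0.600"). The helper below is a hypothetical illustration of that comparison, not part of any Functions tooling.

```javascript
// Hypothetical helper: returns true if an installed version string
// (e.g. from `func --version`) meets a minimum required version.
function meetsMinimum(installed, minimum) {
  const a = installed.trim().split('.').map(Number);
  const b = minimum.trim().split('.').map(Number);
  for (let i = 0; i < Math.max(a.length, b.length); i++) {
    const x = a[i] || 0;
    const y = b[i] || 0;
    if (x !== y) return x > y; // first differing segment decides
  }
  return true; // versions are equal
}

console.log(meetsMinimum('4.0.5382', '4.0.5095')); // → true
console.log(meetsMinimum('4.0.5095', '4.0.5382')); // → false
```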
azure-functions | Create First Function Vs Code Node | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-node.md | In this section, you use Visual Studio Code to create a local Azure Functions pr |Prompt|Selection| |--|--| |**Select a language for your function project**|Choose `JavaScript`.|- |**Select a JavaScript programming model**|Choose `Model V4 (Preview)`| + |**Select a JavaScript programming model**|Choose `Model V4`| |**Select a template for your project's first function**|Choose `HTTP trigger`.| |**Provide a function name**|Type `HttpExample`.| |**Select how you would like to open your project**|Choose `Open in current window`| After you've verified that the function runs correctly on your local computer, i [!INCLUDE [functions-create-azure-resources-vs-code](../../includes/functions-create-azure-resources-vs-code.md)] -## Update app settings --To enable your V4 programming model app to run in Azure, you need to add a new application setting named `AzureWebJobsFeatureFlags` with a value of `EnableWorkerIndexing`. This setting is already in your local.settings.json file. --1. In Visual Studio Code, press <kbd>F1</kbd> to open the command palette. In the command palette, search for and select `Azure Functions: Add New Setting...`. --1. Choose your new function app, type `AzureWebJobsFeatureFlags` for the new app setting name, and press <kbd>Enter</kbd>. --1. For the value, type `EnableWorkerIndexing` and press <kbd>Enter</kbd>. - ## Deploy the project to Azure [!INCLUDE [functions-deploy-project-vs-code](../../includes/functions-deploy-project-vs-code.md)] |
azure-functions | Create First Function Vs Code Typescript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-typescript.md | Before you get started, make sure you have the following requirements in place: + [Azure Functions Core Tools 4.x](functions-run-local.md#install-the-azure-functions-core-tools). ::: zone-end ::: zone pivot="nodejs-model-v4" -+ [Azure Functions Core Tools v4.0.5095 or above](functions-run-local.md#install-the-azure-functions-core-tools). ++ [Azure Functions Core Tools v4.0.5382 or above](functions-run-local.md#install-the-azure-functions-core-tools). ::: zone-end ## <a name="create-an-azure-functions-project"></a>Create your local project In this section, you use Visual Studio Code to create a local Azure Functions pr |Prompt|Selection| |--|--| |**Select a language for your function project**|Choose `TypeScript`.|- |**Select a TypeScript programming model**|Choose `Model V4 (Preview)`| + |**Select a TypeScript programming model**|Choose `Model V4`| |**Select a template for your project's first function**|Choose `HTTP trigger`.| |**Provide a function name**|Type `HttpExample`.| |**Select how you would like to open your project**|Choose `Open in current window`| After you've verified that the function runs correctly on your local computer, i [!INCLUDE [functions-create-azure-resources-vs-code](../../includes/functions-create-azure-resources-vs-code.md)] -## Update app settings --To enable your V4 programming model app to run in Azure, you need to add a new application setting named `AzureWebJobsFeatureFlags` with a value of `EnableWorkerIndexing`. This setting is already in your local.settings.json file. --1. In Visual Studio Code, press <kbd>F1</kbd> to open the command palette. In the command palette, search for and select `Azure Functions: Add New Setting...`. --1. 
Choose your new function app, type `AzureWebJobsFeatureFlags` for the new app setting name, and press <kbd>Enter</kbd>. --1. For the value, type `EnableWorkerIndexing` and press <kbd>Enter</kbd>. - ## Deploy the project to Azure [!INCLUDE [functions-deploy-project-vs-code](../../includes/functions-deploy-project-vs-code.md)] |
azure-functions | Durable Functions Node Model Upgrade | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-node-model-upgrade.md | Title: Upgrade your Durable Functions app to version 4 of the Node.js programming model + Title: Migrate your Durable Functions app to version 4 of the Node.js programming model description: This article shows you how to upgrade your existing Durable Functions apps running on v3 of the Node.js programming model to v4. -# Upgrade your Durable Functions app to version 4 of the Node.js programming model -->[!NOTE] -> Version 4 of the Node.js programming model is currently in public preview. Learn more by visiting the Node [Functions developer guide](../functions-reference-node.md?pivots=nodejs-model-v4). +# Migrate your Durable Functions app to version 4 of the Node.js programming model This article provides a guide to upgrade your existing Durable Functions app to version 4 of the Node.js programming model. Note that this article uses "TIP" banners to summarize the key steps needed to upgrade your app. Before following this guide, make sure you follow these steps first: - Install [Node.js](https://nodejs.org/en/download/releases) version 18.x+. - Install [TypeScript](https://www.typescriptlang.org/) version 4.x+.-- Run your app on [Azure Functions Runtime](../functions-versions.md?tabs=v4&pivots=programming-language-javascript) version 4.16.5+.-- Install [Azure Functions Core Tools](../functions-run-local.md?tabs=v4) version 4.0.5095+.+- Run your app on [Azure Functions Runtime](../functions-versions.md?tabs=v4&pivots=programming-language-javascript) version 4.25+. +- Install [Azure Functions Core Tools](../functions-run-local.md?tabs=v4) version 4.0.5382+. - Review the general [Azure Functions Node.js programming model v4 upgrade guide](../functions-node-upgrade-v4.md). 
## Upgrade the `durable-functions` npm package Before following this guide, make sure you follow these steps first: >[!NOTE] >The programming model version should not be confused with the `durable-functions` package version. `durable-functions` package version 3.x is required for the v4 programming model, while `durable-functions` version 2.x is required for the v3 programming model. -The v4 programming model is supported by the v3.x of the `durable-functions` npm package. In your programming model v3 app, you likely had `durable-functions` v2.x listed in your dependencies. Make sure to update to the (currently in preview) v3.x of the `durable-functions` package. +The v4 programming model is supported by the v3.x of the `durable-functions` npm package. In your programming model v3 app, you likely had `durable-functions` v2.x listed in your dependencies. Make sure to update to the v3.x of the `durable-functions` package. >[!TIP]-> Upgrade to the preview v3.x of the `durable-functions` npm package. You can do this with the following command: +> Upgrade to v3.x of the `durable-functions` npm package. 
You can do this with the following command: > > ```bash-> npm install durable-functions@preview +> npm install durable-functions > ``` ## Register your Durable Functions Triggers In the v4 programming model, declaring triggers and bindings in a separate `func **Migrating an orchestration** :::zone pivot="programming-language-javascript"-# [v4 model](#tab/v4) +# [Model v4](#tab/nodejs-v4) ```javascript const df = require('durable-functions'); df.app.orchestration('durableOrchestrator', function* (context) { }); ``` -# [v3 model](#tab/v3) +# [Model v3](#tab/nodejs-v3) ```javascript const df = require("durable-functions"); module.exports = df.orchestrator(function* (context) { :::zone-end :::zone pivot="programming-language-typescript"-# [v4 model](#tab/v4) +# [Model v4](#tab/nodejs-v4) ```typescript import * as df from 'durable-functions'; const durableHello1Orchestrator: OrchestrationHandler = function* (context: Orch df.app.orchestration('durableOrchestrator', durableHello1Orchestrator); ``` -# [v3 model](#tab/v3) +# [Model v3](#tab/nodejs-v3) ```typescript import * as df from "durable-functions" export default orchestrator; :::zone pivot="programming-language-javascript" -# [v4 model](#tab/v4) +# [Model v4](#tab/nodejs-v4) ```javascript const df = require('durable-functions'); df.app.entity('Counter', (context) => { }); ``` -# [v3 model](#tab/v3) +# [Model v3](#tab/nodejs-v3) ```javascript const df = require("durable-functions"); module.exports = df.entity(function (context) { :::zone pivot="programming-language-typescript" -# [v4 model](#tab/v4) +# [Model v4](#tab/nodejs-v4) ```typescript import * as df from 'durable-functions'; const counterEntity: EntityHandler<number> = (context: EntityContext<number>) => df.app.entity('Counter', counterEntity); ``` -# [v3 model](#tab/v3) +# [Model v3](#tab/nodejs-v3) ```typescript import * as df from "durable-functions" export default entity; :::zone pivot="programming-language-javascript" -# [v4 model](#tab/v4) +# [Model 
v4](#tab/nodejs-v4) ```javascript const df = require('durable-functions'); df.app.activity('hello', { }); ``` -# [v3 model](#tab/v3) +# [Model v3](#tab/nodejs-v3) ```javascript module.exports = async function (context) { module.exports = async function (context) { :::zone-end :::zone pivot="programming-language-typescript"-# [v4 model](#tab/v4) +# [Model v4](#tab/nodejs-v4) ```typescript import * as df from 'durable-functions'; const helloActivity: ActivityHandler = (input: string): string => { df.app.activity('hello', { handler: helloActivity }); ``` -# [v3 model](#tab/v3) +# [Model v3](#tab/nodejs-v3) ```typescript import { AzureFunction, Context } from "@azure/functions" In the v4 model, registering secondary input bindings, like durable clients, is :::zone pivot="programming-language-javascript" -# [v4 model](#tab/v4) +# [Model v4](#tab/nodejs-v4) ```javascript const { app } = require('@azure/functions'); app.http('durableHttpStart', { }); ``` -# [v3 model](#tab/v3) +# [Model v3](#tab/nodejs-v3) ```javascript const df = require("durable-functions"); module.exports = async function (context, req) { :::zone pivot="programming-language-typescript" -# [v4 model](#tab/v4) +# [Model v4](#tab/nodejs-v4) ```typescript import { app, HttpHandler, HttpRequest, HttpResponse, InvocationContext } from '@azure/functions'; app.http('durableHttpStart', { }); ``` -# [v3 model](#tab/v3) +# [Model v3](#tab/nodejs-v3) ```typescript import * as df from "durable-functions" In `v3.x` of `durable-functions`, multiple APIs on the `DurableClient` class (re :::zone pivot="programming-language-javascript" -# [v4 model](#tab/v4) +# [Model v4](#tab/nodejs-v4) ```javascript const client = df.getClient(context) const status = await client.getStatus('instanceId', { }); ``` -# [v3 model](#tab/v3) +# [Model v3](#tab/nodejs-v3) ```javascript const client = df.getClient(context); const status = await client.getStatus('instanceId', false, false, true); :::zone pivot="programming-language-typescript" 
-# [v4 model](#tab/v4) +# [Model v4](#tab/nodejs-v4) ```typescript const client: DurableClient = df.getClient(context); const status: DurableOrchestrationStatus = await client.getStatus('instanceId', }); ``` -# [v3 model](#tab/v3) +# [Model v3](#tab/nodejs-v3) ```typescript const client: DurableOrchestrationClient = df.getClient(context); If your orchestrations used the `callHttp` API, make sure to update these API ca :::zone pivot="programming-language-javascript" -# [v4 model](#tab/v4) +# [Model v4](#tab/nodejs-v4) ```javascript const restartResponse = yield context.df.callHttp({ const restartResponse = yield context.df.callHttp({ }); ``` -# [v3 model](#tab/v3) +# [Model v3](#tab/nodejs-v3) ```javascript const response = yield context.df.callHttp( const response = yield context.df.callHttp( :::zone pivot="programming-language-typescript" -# [v4 model](#tab/v4) +# [Model v4](#tab/nodejs-v4) ```typescript const restartResponse = yield context.df.callHttp({ const restartResponse = yield context.df.callHttp({ }); ``` -# [v3 model](#tab/v3) +# [Model v3](#tab/nodejs-v3) ```javascript const response = yield context.df.callHttp( Below are some of the new exported types: ## Troubleshooting -If you see the following error when running your orchestration code, make sure you are running on at least `v4.16.5` of the [Azure Functions Runtime](../functions-versions.md?tabs=v4&pivots=programming-language-javascript) or at least `v4.0.5095` of [Azure Functions Core Tools](../functions-run-local.md?tabs=v4) if running locally. +If you see the following error when running your orchestration code, make sure you are running on at least `v4.25` of the [Azure Functions Runtime](../functions-versions.md?tabs=v4&pivots=programming-language-javascript) or at least `v4.0.5382` of [Azure Functions Core Tools](../functions-run-local.md?tabs=v4) if running locally. ```bash Exception: The orchestrator can not execute without an OrchestratorStarted event. |
azure-functions | Durable Functions Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-overview.md | Durable Functions is designed to work with all Azure Functions programming langu | - | - | - | - | | .NET / C# / F# | Functions 1.0+ | In-process <br/> Out-of-process | n/a | | JavaScript/TypeScript (V3 prog. model) | Functions 2.0+ | Node 8+ | 2.x bundles |-| JavaScript/TypeScript (V4 prog. model) | Functions 4.16.5+ | Node 18+ | 3.15+ bundles | +| JavaScript/TypeScript (V4 prog. model) | Functions 4.25+ | Node 18+ | 3.15+ bundles | | Python | Functions 2.0+ | Python 3.7+ | 2.x bundles | | Python (V2 prog. model) | Functions 4.0+ | Python 3.7+ | 3.15+ bundles | | PowerShell | Functions 3.0+ | PowerShell 7+ | 2.x bundles | | Java | Functions 4.0+ | Java 8+ | 4.x bundles | ::: zone pivot="javascript"-> [!NOTE] -> The new programming model for authoring Functions in Node.js (V4) is currently in preview. Compared to the current model, the new experience is designed to be more flexible and intuitive for JavaScript/TypeScript developers. Learn more about the differences between the models in the [Node.js upgrade guide](../functions-node-upgrade-v4.md). - ::: zone-end - Like Azure Functions, there are templates to help you develop Durable Functions using [Visual Studio](durable-functions-create-first-csharp.md), [Visual Studio Code](quickstart-js-vscode.md), and the [Azure portal](durable-functions-create-portal.md). ## Application patterns You can use the `context` parameter to invoke other functions by name, pass para ::: zone-end ::: zone pivot="javascript" -# [V3 model](#tab/v3-model) +# [Model v3](#tab/nodejs-v3) ```javascript const df = require("durable-functions"); You can use the `context.df` object to invoke other functions by name, pass para > [!NOTE] > The `context` object in JavaScript represents the entire [function context](../functions-reference-node.md#context-object). 
Access the Durable Functions context using the `df` property on the main context. -# [V4 model](#tab/v4-model) +# [Model v4](#tab/nodejs-v4) ```javascript const df = require("durable-functions"); The automatic checkpointing that happens at the `await` call on `Task.WhenAll` e ::: zone-end ::: zone pivot="javascript" -# [V3 model](#tab/v3-model) +# [Model v3](#tab/nodejs-v3) ```javascript const df = require("durable-functions"); The fan-out work is distributed to multiple instances of the `F2` function. The The automatic checkpointing that happens at the `yield` call on `context.df.Task.all` ensures that a potential midway crash or reboot doesn't require restarting an already completed task. -# [V4 model](#tab/v4-model) +# [Model v4](#tab/nodejs-v4) ```javascript const df = require("durable-functions"); public static async Task Run( ::: zone-end ::: zone pivot="javascript" -# [V3 model](#tab/v3-model) +# [Model v3](#tab/nodejs-v3) ```javascript const df = require("durable-functions"); module.exports = df.orchestrator(function*(context) { }); ``` -# [V4 model](#tab/v4-model) +# [Model v4](#tab/nodejs-v4) ```javascript const df = require("durable-functions"); To create the durable timer, call `context.CreateTimer`. The notification is rec ::: zone-end ::: zone pivot="javascript" -# [V3 model](#tab/v3-model) +# [Model v3](#tab/nodejs-v3) ```javascript const df = require("durable-functions"); module.exports = df.orchestrator(function*(context) { To create the durable timer, call `context.df.createTimer`. The notification is received by `context.df.waitForExternalEvent`. Then, `context.df.Task.any` is called to decide whether to escalate (timeout happens first) or process the approval (the approval is received before timeout). 
-# [V4 model](#tab/v4-model) +# [Model v4](#tab/nodejs-v4) ```javascript const df = require("durable-functions"); public static async Task Run( ::: zone-end ::: zone pivot="javascript" -# [V3 model](#tab/v3-model) +# [Model v3](#tab/nodejs-v3) ```javascript const df = require("durable-functions"); module.exports = async function (context) { }; ``` -# [V4 model](#tab/v4-model) +# [Model v4](#tab/nodejs-v4) ```javascript const df = require("durable-functions"); Durable entities are currently not supported in the .NET-isolated worker. ::: zone-end ::: zone pivot="javascript" -# [V3 model](#tab/v3-model) +# [Model v3](#tab/nodejs-v3) ```javascript const df = require("durable-functions"); module.exports = df.entity(function(context) { }); ``` -# [V4 model](#tab/v4-model) +# [Model v4](#tab/nodejs-v4) ```javascript const df = require("durable-functions"); Durable entities are currently not supported in the .NET-isolated worker. ::: zone-end ::: zone pivot="javascript" -# [V3 model](#tab/v3-model) +# [Model v3](#tab/nodejs-v3) ```javascript const df = require("durable-functions"); module.exports = async function (context) { }; ``` -# [V4 model](#tab/v4-model) +# [Model v4](#tab/nodejs-v4) ```javascript const df = require("durable-functions"); |
azure-functions | Quickstart Js Vscode | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-js-vscode.md | To complete this tutorial: * Make sure you have the latest version of the [Azure Functions Core Tools](../functions-run-local.md). ::: zone-end ::: zone pivot="nodejs-model-v4"-* Make sure you have [Azure Functions Core Tools](../functions-run-local.md) version `v4.0.5095` or above. +* Make sure you have [Azure Functions Core Tools](../functions-run-local.md) version `v4.0.5382` or above. ::: zone-end * Durable Functions require an Azure storage account. You need an Azure subscription. In this section, you use Visual Studio Code to create a local Azure Functions pr | Prompt | Value | Description | | | -- | -- | | Select a language for your function app project | JavaScript | Create a local Node.js Functions project. |- | Select a JavaScript programming model | Model V4 (Preview) | Choose the V4 programming model (in preview). | + | Select a JavaScript programming model | Model V4 | Choose the V4 programming model. | | Select a version | Azure Functions v4 | You only see this option when the Core Tools aren't already installed. In this case, Core Tools are installed the first time you run the app. | | Select a template for your project's first function | Skip for now | | | Select how you would like to open your project | Open in current window | Reopens VS Code in the folder you selected. | You now have a Durable Functions app that can be run locally and deployed to Azu ## Test the function locally --> [!NOTE] -> To run the V4 programming model, your app needs to have the `EnableWorkerIndexing` feature flag set. When running locally, you need to set `AzureWebJobsFeaturesFlags` to value of `EnableWorkerIndexing` in your `local.settings.json` file. This should already be set when creating your project. To verify, check the following line exists in your `local.settings.json` file, and add it if it doesn't. 
-> -> ```json -> "AzureWebJobsFeatureFlags": "EnableWorkerIndexing" -> ``` -- Azure Functions Core Tools lets you run an Azure Functions project on your local development computer. You're prompted to install these tools the first time you start a function from Visual Studio Code. ::: zone pivot="nodejs-model-v3" After you've verified that the function runs correctly on your local computer, i [!INCLUDE [functions-publish-project-vscode](../../../includes/functions-publish-project-vscode.md)] --## Update app settings --To enable your V4 programming model app to run in Azure, you need to add the `EnableWorkerIndexing` flag under the `AzureWebJobsFeatureFlags` app setting. --1. In Visual Studio Code, press <kbd>F1</kbd> to open the command palette. In the command palette, search for and select `Azure Functions: Add New Setting...`. -2. Choose your new function app, then type `AzureWebJobsFeatureFlags` for the new app setting name, and press <kbd>Enter</kbd>. -3. For the value, type `EnableWorkerIndexing` and press <kbd>Enter</kbd>. -- ## Test your function in Azure ::: zone pivot="nodejs-model-v4" > [!NOTE]-> To use the V4 node programming model, make sure your app is running on at least version 4.16.5 of the Azure Functions runtime. +> To use the V4 node programming model, make sure your app is running on at least version 4.25 of the Azure Functions runtime. ::: zone-end ::: zone pivot="nodejs-model-v3" |
azure-functions | Quickstart Ts Vscode | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-ts-vscode.md | To complete this tutorial: * Make sure you have the latest version of the [Azure Functions Core Tools](../functions-run-local.md). ::: zone-end ::: zone pivot="nodejs-model-v4"-* Make sure you have [Azure Functions Core Tools](../functions-run-local.md) version `v4.0.5095` or above. +* Make sure you have [Azure Functions Core Tools](../functions-run-local.md) version `v4.0.5382` or above. ::: zone-end * Durable Functions require an Azure storage account. You need an Azure subscription. In this section, you use Visual Studio Code to create a local Azure Functions pr | Prompt | Value | Description | | | -- | -- | | Select a language for your function app project | TypeScript | Create a local Node.js Functions project using TypeScript. |- | Select a JavaScript programming model | Model V4 (Preview) | Choose the V4 programming model (in preview). | + | Select a JavaScript programming model | Model V4 | Choose the V4 programming model. | | Select a version | Azure Functions v4 | You only see this option when the Core Tools aren't already installed. In this case, Core Tools are installed the first time you run the app. | | Select a template for your project's first function | Skip for now | | | Select how you would like to open your project | Open in current window | Reopens VS Code in the folder you selected. | You now have a Durable Functions app that can be run locally and deployed to Azu Azure Functions Core Tools lets you run an Azure Functions project on your local development computer. You're prompted to install these tools the first time you start a function from Visual Studio Code. --> [!NOTE] -> To run the V4 programming model, your app needs to have the `EnableWorkerIndexing` feature flag set. 
When running locally, you need to set `AzureWebJobsFeaturesFlags` to value of `EnableWorkerIndexing` in your `local.settings.json` file. This should already be set when creating your project. To verify, check the following line exists in your `local.settings.json` file, and add it if it doesn't. -> -> ```json -> "AzureWebJobsFeatureFlags": "EnableWorkerIndexing" -> ``` -- ::: zone pivot="nodejs-model-v3" 1. To test your function, set a breakpoint in the `Hello` activity function code (*Hello/index.ts*). Press F5 or select `Debug: Start Debugging` from the command palette to start the function app project. Output from Core Tools is displayed in the **Terminal** panel. After you've verified that the function runs correctly on your local computer, i [!INCLUDE [functions-publish-project-vscode](../../../includes/functions-publish-project-vscode.md)] --## Update app settings --To enable your V4 programming model app to run in Azure, you need to add the `EnableWorkerIndexing` flag under the `AzureWebJobsFeatureFlags` app setting. --1. In Visual Studio Code, press <kbd>F1</kbd> to open the command palette. In the command palette, search for and select `Azure Functions: Add New Setting...`. -2. Choose your new function app, then type `AzureWebJobsFeatureFlags` for the new app setting name, and press <kbd>Enter</kbd>. -3. For the value, type `EnableWorkerIndexing` and press <kbd>Enter</kbd>.' -- ## Test your function in Azure ::: zone pivot="nodejs-model-v4" > [!NOTE]-> To use the V4 node programming model, make sure your app is running on at least version 4.16.5 of the Azure Functions runtime. +> To use the V4 node programming model, make sure your app is running on at least version 4.25 of the Azure Functions runtime. ::: zone-end ::: zone pivot="nodejs-model-v3" |
azure-functions | Functions App Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md | The following major runtime version values are supported: | `~2` | 2.x | No longer supported | | `~1` | 1.x | Supported | +## FUNCTIONS\_NODE\_BLOCK\_ON\_ENTRY\_POINT\_ERROR ++This app setting is a temporary way for Node.js apps to enable a breaking change that makes entry point errors easier to troubleshoot on Node.js v18 or lower. It's highly recommended to use `true`, especially for programming model v4 apps, which always use entry point files. The behavior without the breaking change (`false`) ignores entry point errors and doesn't log them in Application Insights. ++Starting with Node.js v20, the app setting has no effect and the breaking change behavior is always enabled. ++For Node.js v18 or lower, the app setting can be used and the default behavior depends on whether the error happens before or after a model v4 function has been registered: +- If the error is thrown before (for example, if you're using model v3 or your entry point file doesn't exist), the default behavior matches `false`. +- If the error is thrown after (for example, if you try to register duplicate model v4 functions), the default behavior matches `true`. ++|Key|Value|Description| +||--|--| +|FUNCTIONS\_NODE\_BLOCK\_ON\_ENTRY\_POINT\_ERROR|`true`|Block on entry point errors and log them in Application Insights.| +|FUNCTIONS\_NODE\_BLOCK\_ON\_ENTRY\_POINT\_ERROR|`false`|Ignore entry point errors and don't log them in Application Insights.| + ## FUNCTIONS\_V2\_COMPATIBILITY\_MODE This setting enables your function app to run in a version 2.x compatible mode on the version 3.x runtime. Use this setting only if encountering issues after upgrading your function app from version 2.x to 3.x of the runtime. |
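Because `FUNCTIONS_NODE_BLOCK_ON_ENTRY_POINT_ERROR` is an ordinary app setting, for local development it would go in the `Values` section of *local.settings.json*. A minimal sketch, assuming the setting is honored locally the same way it is in Azure:

```json
{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "node",
    "FUNCTIONS_NODE_BLOCK_ON_ENTRY_POINT_ERROR": "true"
  }
}
```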
azure-functions | Functions Node Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-node-troubleshoot.md | + + Title: Troubleshoot Node.js apps in Azure Functions +description: Learn how to troubleshoot common errors when you deploy or run a Node.js app in Azure Functions. + Last updated : 09/20/2023+ms.devlang: javascript, typescript +++zone_pivot_groups: functions-nodejs-model +++# Troubleshoot Node.js apps in Azure Functions +++This article provides a guide for troubleshooting common scenarios in Node.js function apps. ++The **Diagnose and solve problems** tab in the [Azure portal](https://portal.azure.com) is a useful resource to monitor and diagnose possible issues related to your application. It also supplies potential solutions to your problems based on the diagnosis. For more information, see [Azure Function app diagnostics](./functions-diagnostics.md). ++Another useful resource is the **Logs** tab in the [Azure portal](https://portal.azure.com) for your Application Insights instance so that you can run custom [KQL queries](/azure/data-explorer/kusto/query/). The following example query shows how to view errors and warnings for your app in the past day: ++```kusto +let myAppName = "<your app name>"; +let startTime = ago(1d); +let endTime = now(); +union traces,requests,exceptions +| where cloud_RoleName =~ myAppName +| where timestamp between (startTime .. endTime) +| where severityLevel > 2 +``` ++If those resources didn't solve your problem, the following sections provide advice for specific application issues: ++## No functions found ++If you see any of the following errors in your logs: ++> No HTTP triggers found. ++> No job functions found. Try making your job classes and methods public. If you're using binding extensions (e.g. Azure Storage, ServiceBus, Timers, etc.) make sure you've called the registration method for the extension(s) in your startup code (e.g. 
builder.AddAzureStorage(), builder.AddServiceBus(), builder.AddTimers(), etc.). ++Try the following fixes: ++- When running locally, make sure you're using Azure Functions Core Tools v4.0.5382 or higher. +- When running in Azure: + - Make sure you're using [Azure Functions Runtime Version](./functions-versions.md) 4.25 or higher. + - Make sure you're using Node.js v18 or higher. + - Set the app setting `FUNCTIONS_NODE_BLOCK_ON_ENTRY_POINT_ERROR` to `true`. This setting is recommended for all model v4 apps and ensures that all entry point errors are visible in your Application Insights logs. For more information, see [App settings reference for Azure Functions](./functions-app-settings.md#functions_node_block_on_entry_point_error). + - Check your function app logs for entry point errors. The following example query shows how to view entry point errors for your app in the past day: ++ ```kusto + let myAppName = "<your app name>"; + let startTime = ago(1d); + let endTime = now(); + union traces,requests,exceptions + | where cloud_RoleName =~ myAppName + | where timestamp between (startTime .. endTime) + | where severityLevel > 2 + | where message has "entry point" + ``` ++- Make sure your app has the [required folder structure](./functions-reference-node.md?pivots=nodejs-model-v3#folder-structure) with a *host.json* at the root and a folder for each function containing a *function.json* file. ++## Undici request is not a constructor ++If you get the following error in your function app logs: ++> System.Private.CoreLib: Exception while executing function: Functions.httpTrigger1. System.Private.CoreLib: Result: Failure +> Exception: undici_1.Request is not a constructor ++Make sure you're using Node.js version 18.x or higher. ++## Failed to detect the Azure Functions runtime ++If you get the following error in your function app logs: ++> WARNING: Failed to detect the Azure Functions runtime.
Switching "@azure/functions" package to test mode - not all features are supported. ++Check your `package.json` file for a reference to `applicationinsights` and make sure the version is `^2.7.1` or higher. After updating the version, run `npm install`. ++## Get help from Microsoft ++You can get more help from Microsoft in one of the following ways: ++- Search the known issues in the [Azure Functions Node.js repository](https://github.com/Azure/azure-functions-nodejs-library/issues). If you don't see your issue mentioned, create a new issue and let us know what has happened. +- If you're not able to diagnose your problem using this guide, Microsoft support engineers are available to help diagnose issues with your application. Microsoft offers [various support plans](https://azure.microsoft.com/support/plans). Create a support ticket in the **Support + troubleshooting** section of your function app page in the [Azure portal](https://portal.azure.com). ++## Next steps ++- [Microsoft Q&A page for Azure Functions](/answers/tags/87/azure-functions) +- [Azure Functions Node.js developer guide](functions-reference-node.md) |
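For the `applicationinsights` version check above, the relevant part of *package.json* could look like the following sketch (version pins illustrative; the `@azure/functions` line applies only if your app uses the v4 programming model, whose minimum package version is v4.0.0 per the migration guide):

```json
{
  "dependencies": {
    "@azure/functions": "^4.0.0",
    "applicationinsights": "^2.7.1"
  }
}
```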
azure-functions | Functions Node Upgrade V4 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-node-upgrade-v4.md | Title: Upgrade to v4 of the Node.js model for Azure Functions + Title: Migrate to v4 of the Node.js model for Azure Functions description: This article shows you how to upgrade your existing function apps running on v3 of the Node.js programming model to v4. Last updated 03/15/2023-# Upgrade to version 4 of the Node.js programming model for Azure Functions +# Migrate to version 4 of the Node.js programming model for Azure Functions This article discusses the differences between version 3 and version 4 of the Node.js programming model and how to upgrade an existing v3 app. If you want to create a new v4 app instead of upgrading an existing v3 app, see the tutorial for either [Visual Studio Code (VS Code)](./create-first-function-cli-node.md) or [Azure Functions Core Tools](./create-first-function-vs-code-node.md). This article uses "tip" alerts to highlight the most important concrete actions that you should take to upgrade your app. Version 4 is designed to provide Node.js developers with the following benefits: Version 4 of the Node.js programming model requires the following minimum versions: -- [`@azure/functions`](https://www.npmjs.com/package/@azure/functions) npm package v4.0.0-alpha.9++- [`@azure/functions`](https://www.npmjs.com/package/@azure/functions) npm package v4.0.0 - [Node.js](https://nodejs.org/en/download/releases/) v18+ - [TypeScript](https://www.typescriptlang.org/) v4+-- [Azure Functions Runtime](./functions-versions.md) v4.16+-- [Azure Functions Core Tools](./functions-run-local.md) v4.0.5095+ (if running locally)--## Enable the v4 programming model --To indicate that your function code is using the v4 model, you need to set the `EnableWorkerIndexing` flag on the `AzureWebJobsFeatureFlags` application setting. 
When you're running locally, add `AzureWebJobsFeatureFlags` with a value of `EnableWorkerIndexing` to your *local.settings.json* file. When you're running in Azure, you add this application setting by using the tool of your choice. --# [Azure CLI](#tab/azure-cli-set-indexing-flag) --Replace `<FUNCTION_APP_NAME>` and `<RESOURCE_GROUP_NAME>` with the name of your function app and resource group, respectively. --```azurecli -az functionapp config appsettings set --name <FUNCTION_APP_NAME> --resource-group <RESOURCE_GROUP_NAME> --settings AzureWebJobsFeatureFlags=EnableWorkerIndexing -``` --# [Azure PowerShell](#tab/azure-powershell-set-indexing-flag) --Replace `<FUNCTION_APP_NAME>` and `<RESOURCE_GROUP_NAME>` with the name of your function app and resource group, respectively. --```azurepowershell -Update-AzFunctionAppSetting -Name <FUNCTION_APP_NAME> -ResourceGroupName <RESOURCE_GROUP_NAME> -AppSetting @{"AzureWebJobsFeatureFlags" = "EnableWorkerIndexing"} -``` --# [VS Code](#tab/vs-code-set-indexing-flag) --1. Make sure you have the [Azure Functions extension for VS Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) installed. -1. Select the <kbd>F1</kbd> key to open the command palette. In the command palette, search for and select **Azure Functions: Add New Setting**. -1. Choose your subscription and function app when prompted. -1. For the name, type **AzureWebJobsFeatureFlags** and select the <kbd>Enter</kbd> key. -1. For the value, type **EnableWorkerIndexing** and select the <kbd>Enter</kbd> key. --+- [Azure Functions Runtime](./functions-versions.md) v4.25+ +- [Azure Functions Core Tools](./functions-run-local.md) v4.0.5382+ (if running locally) ## Include the npm package In v4, the [`@azure/functions`](https://www.npmjs.com/package/@azure/functions) > [!TIP] > Make sure the `@azure/functions` package is listed in the `dependencies` section (not `devDependencies`) of your *package.json* file. 
You can install v4 by using the following command: > ```-> npm install @azure/functions@preview +> npm install @azure/functions > ``` ## Set your app entry point The types use the [`undici`](https://undici.nodejs.org/) package in Node.js. Thi ## Troubleshoot -If you get the following error, make sure that you [set the `EnableWorkerIndexing` flag](#enable-the-v4-programming-model) and that you're using the minimum version of all [requirements](#requirements): --> No job functions found. Try making your job classes and methods public. If you're using binding extensions (e.g. Azure Storage, ServiceBus, Timers, etc.) make sure you've called the registration method for the extension(s) in your startup code (e.g. builder.AddAzureStorage(), builder.AddServiceBus(), builder.AddTimers(), etc.). --If you get the following error, make sure that you're using Node.js version 18.x: --> System.Private.CoreLib: Exception while executing function: Functions.httpTrigger1. System.Private.CoreLib: Result: Failure -> Exception: undici_1.Request is not a constructor --For any other problems or to give feedback, file an issue in the [Azure Functions Node.js repository](https://github.com/Azure/azure-functions-nodejs-library/issues). +See the [Node.js Troubleshoot guide](./functions-node-troubleshoot.md). |
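The "Set your app entry point" step mentioned above relies on the `main` field of *package.json*, which in the v4 model points at the file or glob that registers your functions. A sketch, with a hypothetical path for a JavaScript app keeping one file per function under *src/functions* (adjust to your own layout, for example a *dist* path for compiled TypeScript):

```json
{
  "main": "src/functions/*.js"
}
```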
azure-functions | Functions Reference Node | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-node.md | zone_pivot_groups: functions-nodejs-model This guide is an introduction to developing Azure Functions using JavaScript or TypeScript. The article assumes that you have already read the [Azure Functions developer guide](functions-reference.md). > [!IMPORTANT]-> The content of this article changes based on your choice of the Node.js programming model in the selector at the top of this page. The version you choose should match the version of the [`@azure/functions`](https://www.npmjs.com/package/@azure/functions) npm package you are using in your app. If you do not have that package listed in your `package.json`, the default is v3. Learn more about the differences between v3 and v4 in the [upgrade guide](./functions-node-upgrade-v4.md). +> The content of this article changes based on your choice of the Node.js programming model in the selector at the top of this page. The version you choose should match the version of the [`@azure/functions`](https://www.npmjs.com/package/@azure/functions) npm package you are using in your app. If you do not have that package listed in your `package.json`, the default is v3. Learn more about the differences between v3 and v4 in the [migration guide](./functions-node-upgrade-v4.md). As a Node.js developer, you might also be interested in one of the following articles: The following table shows each version of the Node.js programming model along wi | [Programming Model Version](https://www.npmjs.com/package/@azure/functions?activeTab=versions) | Support Level | [Functions Runtime Version](./functions-versions.md) | [Node.js Version](https://github.com/nodejs/release#release-schedule) | Description | | - | - | | | |-| 4.x | Preview | 4.16+ | 20.x (Preview), 18.x | Supports a flexible file structure and code-centric approach to triggers and bindings. 
| +| 4.x | GA | 4.25+ | 20.x (Preview), 18.x | Supports a flexible file structure and code-centric approach to triggers and bindings. | | 3.x | GA | 4.x | 20.x (Preview), 18.x, 16.x, 14.x | Requires a specific file structure with your triggers and bindings declared in a "function.json" file | | 2.x | GA (EOL) | 3.x | 14.x, 12.x, 10.x | Reached end of life (EOL) on December 13, 2022. See [Functions Versions](./functions-versions.md) for more info. | | 1.x | GA (EOL) | 2.x | 10.x, 8.x | Reached end of life (EOL) on December 13, 2022. See [Functions Versions](./functions-versions.md) for more info. | export default trigger1; ::: zone-end +## Troubleshoot ++See the [Node.js Troubleshoot guide](./functions-node-troubleshoot.md). + ## Next steps For more information, see the following resources: |
azure-monitor | Container Insights Enable Arc Enabled Clusters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md | +- To view the monitoring data, you need to have the [Monitoring Reader](../roles-permissions-security.md#monitoring-reader) or [Monitoring Contributor](../roles-permissions-security.md#monitoring-contributor) role. - The following endpoints need to be enabled for outbound access in addition to the [Azure Arc-enabled Kubernetes network requirements](../../azure-arc/kubernetes/network-requirements.md). **Azure public cloud** az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-n az deployment group create --resource-group <resource-group> --template-file ./arc-k8s-azmon-extension-arm-template.json --parameters @./arc-k8s-azmon-extension-arm-template-params.json ``` + ## Verify extension installation status Run the following command to show the latest status of the `Microsoft.AzureMonit az k8s-extension show --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters -n azuremonitor-containers ``` + ## Migrate to managed identity authentication az k8s-extension create --name azuremonitor-containers --cluster-name \<cluster- ## [Resource Manager](#tab/migrate-arm) + 1. Download the template at [https://aka.ms/arc-k8s-azmon-extension-msi-arm-template](https://aka.ms/arc-k8s-azmon-extension-msi-arm-template) and save it as **arc-k8s-azmon-extension-msi-arm-template.json**. 2. Download the parameter file at [https://aka.ms/arc-k8s-azmon-extension-msi-arm-template-params](https://aka.ms/arc-k8s-azmon-extension-msi-arm-template-params) and save it as **arc-k8s-azmon-extension-msi-arm-template-params.json**.
az account set --subscription "Subscription Name" az deployment group create --resource-group <resource-group> --template-file ./arc-k8s-azmon-extension-msi-arm-template.json --parameters @./arc-k8s-azmon-extension-msi-arm-template-params.json ``` + ## Delete extension instance |
azure-monitor | Container Insights Logging V2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-logging-v2.md | Follow the instructions to configure an existing ConfigMap or to use a new one. 1. For configuring via CLI, use the corresponding [config file](./container-insights-cost-config.md#configuring-aks-data-collection-settings-using-azure-cli), update the `enableContainerLogV2` field in the config file to be true. + ### Configure an existing ConfigMap Additionally, the feature also adds support for .NET and Go stack traces, which Customers must [enable ContainerLogV2](./container-insights-logging-v2.md#enable-the-containerlogv2-schema) for multi-line logging to work. ### How to enable -Multi-line logging can be enabled by setting the *enable_multiline_logs* flag to "true" in [the config map](https://github.com/microsoft/Docker-Provider/blob/ci_prod/kubernetes/container-azm-ms-agentconfig.yaml#L49) +Multi-line logging is a preview feature and can be enabled by setting the **enabled** flag to "true" under the `[log_collection_settings.enable_multiline_logs]` section in the [config map](https://github.com/microsoft/Docker-Provider/blob/ci_prod/kubernetes/container-azm-ms-agentconfig.yaml) -### Next steps for Multi-line logging -* Read more about the [ContainerLogV2 schema](https://aka.ms/ContainerLogv2) +```yaml +[log_collection_settings.enable_multiline_logs]
# fluent-bit based multiline log collection for go (stacktrace), dotnet (stacktrace)
# if enabled will also stitch together container logs split by docker/cri due to size limits(16KB per log line)
enabled = "true"
+``` ## Next steps * Configure [Basic Logs](../logs/basic-logs-configure.md) for ContainerLogv2. * Learn how to [query data](./container-insights-log-query.md#container-logs) from ContainerLogV2+ |
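The ConfigMap comment above mentions stitching together container logs that docker/cri split at the 16KB-per-line limit. The agent's fluent-bit pipeline does this for you; purely as an illustration of the idea (not agent code), CRI-formatted runtimes tag each written line as `P` (partial) or `F` (final), and stitching concatenates partials until the closing final line:

```javascript
// Illustration only: how P/F-tagged CRI log lines can be stitched back
// into whole entries. The real stitching is done by the agent's
// fluent-bit pipeline, not by user code.
function stitchCriLines(lines) {
  const out = [];
  let buffer = "";
  for (const line of lines) {
    // CRI line format: "<timestamp> <stream> <P|F> <content>"
    const [, , tag, ...rest] = line.split(" ");
    const content = rest.join(" ");
    if (tag === "P") {
      buffer += content; // partial: keep accumulating
    } else {
      out.push(buffer + content); // final: emit the stitched entry
      buffer = "";
    }
  }
  // Note: a trailing partial with no final line is dropped in this sketch.
  return out;
}

const stitched = stitchCriLines([
  "2023-10-06T00:17:09.669Z stdout P goroutine 1 [running]:",
  "2023-10-06T00:17:09.669Z stdout F main.main()",
  "2023-10-06T00:17:09.670Z stdout F done",
]);
console.log(stitched); // stitched: ["goroutine 1 [running]:main.main()", "done"]
```

The real pipeline also handles buffer flushing and per-container state, which this sketch omits.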
azure-monitor | Container Insights V2 Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-v2-migration.md | + + Title: Migrate from ContainerLog to ContainerLogV2 +description: This article describes the transition plan from the ContainerLog to ContainerLogV2 table + Last updated : 07/19/2023++++# Migrate from ContainerLog to ContainerLogV2 ++With the upgraded offering of ContainerLogV2 becoming generally available, on 30th September 2026, the ContainerLog table will be retired. If you currently ingest container insights data to the ContainerLog table, please make sure to transition to using ContainerLogV2 prior to that date. ++>[!NOTE] +> Support for ingesting the ContainerLog table will be **retired on 30th September 2026**. ++## Steps to complete the transition ++To transition to ContainerLogV2, we recommend the following approach. ++1. Learn about the feature differences between ContainerLog and ContainerLogV2 +2. Assess the impact migrating to ContainerLogV2 may have on your existing queries, alerts, or dashboards +3. Enable the ContainerLogV2 schema through either the container insights data collection rules (DCRs) or ConfigMap +4. Validate that you are now ingesting ContainerLogV2 to your Log Analytics workspace. ++## ContainerLog vs ContainerLogV2 schema ++The following table highlights the key differences between using ContainerLog and ContainerLogV2 schema. 
++| Feature Differences | ContainerLog | ContainerLogV2 | +| - | -- | - | +| Onboarding | Only configurable through the ConfigMap | Configurable through both the ConfigMap and DCR | +| Pricing | Only compatible with full-priced analytics logs | Supports the low cost basic logs tier in addition to analytics logs | +| Querying | Requires multiple join operations with inventory tables for standard queries | Includes additional pod and container metadata to reduce query complexity and join operations | +| Multiline | Not supported, multiline entries are split into multiple rows | Support for multiline logging to allow consolidated, single entries for multiline output | ++## Assess the impact on existing alerts ++If you are currently using ContainerLog in your alerts, then migrating to ContainerLogV2 will require updates to your alert queries for them to continue functioning as expected. ++To scan for alerts that may be referencing the ContainerLog table, run the following Azure Resource Graph query: ++```Kusto +resources +| where type in~ ('microsoft.insights/scheduledqueryrules') and ['kind'] !in~ ('LogToMetric') +| extend severity = strcat("Sev", properties["severity"]) +| extend enabled = tobool(properties["enabled"]) +| where enabled in~ ('true') +| where tolower(properties["targetResourceTypes"]) matches regex 'microsoft.operationalinsights/workspaces($|/.*)?' or tolower(properties["targetResourceType"]) matches regex 'microsoft.operationalinsights/workspaces($|/.*)?' or tolower(properties["scopes"]) matches regex 'providers/microsoft.operationalinsights/workspaces($|/.*)?' +| where properties contains "ContainerLog" +| project id,name,type,properties,enabled,severity,subscriptionId +| order by tolower(name) asc +``` ++## Next steps +- [Enable ContainerLogV2](container-insights-logging-v2.md) |
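As an illustration of the "Querying" row in the table above, here is how a hypothetical alert query might change when moving from ContainerLog to ContainerLogV2. Table and column names (`KubePodInventory` for the join; `PodName`, `PodNamespace`, and `LogMessage` on ContainerLogV2) are as documented for these tables, but verify them against your own workspace before updating alerts:

```kusto
// Before: ContainerLog carries no pod metadata, so a join is needed
ContainerLog
| join kind=inner (
    KubePodInventory
    | distinct ContainerID, Name, Namespace
  ) on ContainerID
| where Namespace == "my-namespace" and LogEntry has "error"
| project TimeGenerated, Name, LogEntry

// After: ContainerLogV2 carries pod metadata on each row
ContainerLogV2
| where PodNamespace == "my-namespace" and LogMessage has "error"
| project TimeGenerated, PodName, LogMessage
```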
azure-monitor | Resource Logs Schema | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs-schema.md | A combination of the resource type (available in the `resourceId` property) and | Name | Required or optional | Description | ||||-| `time` | Required | The timestamp (UTC) of the event. | +| `time` | Required | The timestamp (UTC) of the event being logged. | | `resourceId` | Required | The resource ID of the resource that emitted the event. For tenant services, this is of the form */tenants/tenant-id/providers/provider-name*. | | `tenantId` | Required for tenant logs | The tenant ID of the Active Directory tenant that this event is tied to. This property is used only for tenant-level logs. It does not appear in resource-level logs. |-| `operationName` | Required | The name of the operation that this event represents. If the event represents an Azure role-based access control (RBAC) operation, this is the Azure RBAC operation name (for example, `Microsoft.Storage/storageAccounts/blobServices/blobs/Read`). This name is typically modeled in the form of an Azure Resource Manager operation, even if it's not a documented Resource Manager operation: (`Microsoft.<providerName>/<resourceType>/<subtype>/<Write/Read/Delete/Action>`). | +| `operationName` | Required | The name of the operation that this event is logging, for example `Microsoft.Storage/storageAccounts/blobServices/blobs/Read`. The operationName is typically modeled in the form of an Azure Resource Manager operation, `Microsoft.<providerName>/<resourceType>/<subtype>/<Write|Read|Delete|Action>`, even if it's not a documented Resource Manager operation. | | `operationVersion` | Optional | The API version associated with the operation, if `operationName` was performed through an API (for example, `http://myservice.windowsazure.net/object?api-version=2016-06-01`). 
If no API corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |-| `category` | Required | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. Typical log categories are `Audit`, `Operational`, `Execution`, and `Request`. | -| `resultType` | Optional | The status of the event. Typical values include `Started`, `In Progress`, `Succeeded`, `Failed`, `Active`, and `Resolved`. | +| `category` | Required | The log category of the event being logged. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. Typical log categories are `Audit`, `Operational`, `Execution`, and `Request`. | +| `resultType` | Optional | The status of the logged event, if applicable. Values include `Started`, `In Progress`, `Succeeded`, `Failed`, `Active`, and `Resolved`. | | `resultSignature` | Optional | The substatus of the event. If this operation corresponds to a REST API call, this field is the HTTP status code of the corresponding REST call. | | `resultDescription `| Optional | The static text description of this operation; for example, `Get storage file`. | | `durationMs` | Optional | The duration of the operation in milliseconds. | |
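Put together, a single event following the common schema above might look like the following. All values are hypothetical; the `operationName`, `category`, and `resultDescription` examples are taken directly from the table:

```json
{
  "time": "2023-09-20T13:07:32.1234567Z",
  "resourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/example-rg/providers/Microsoft.Storage/storageAccounts/examplestorage",
  "operationName": "Microsoft.Storage/storageAccounts/blobServices/blobs/Read",
  "operationVersion": "2016-06-01",
  "category": "Request",
  "resultType": "Succeeded",
  "resultSignature": "200",
  "resultDescription": "Get storage file",
  "durationMs": 12
}
```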
azure-monitor | Vminsights Performance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-performance.md | -VM insights includes a set of performance charts that target several key performance indicators to help you determine how well a virtual machine is performing. The charts show resource utilization over a period of time. You can use them to identify bottlenecks and anomalies. You can also switch to a perspective that lists each machine to view resource utilization based on the metric selected. +VM insights includes a set of performance charts that target several key [performance indicators](vminsights-log-query.md#performance-records) to help you determine how well a virtual machine is performing. The charts show resource utilization over a period of time. You can use them to identify bottlenecks and anomalies. You can also switch to a perspective that lists each machine to view resource utilization based on the metric selected. VM insights monitors key operating system performance indicators related to processor, memory, network adapter, and disk utilization. Performance complements the health monitoring feature and helps to: Selecting the pushpin icon in the upper-right corner of a chart pins it to the l - Learn how to use [workbooks](vminsights-workbooks.md) that are included with VM insights to further analyze performance and network metrics. - To learn about discovered application dependencies, see [View VM insights Map](vminsights-maps.md).++ |
azure-netapp-files | Azure Government | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-government.md | All [Azure NetApp Files features](whats-new.md) available on Azure public cloud |: |: |: | | Azure NetApp Files backup | Public preview | No | | Azure NetApp Files datastores for AVS | Generally available (GA) | No | -| Azure NetApp Files customer-managed keys | Public preview | No | +| Azure NetApp Files customer-managed keys | Public preview | Public preview [(in select regions)](configure-customer-managed-keys.md#supported-regions) | | Azure NetApp Files large volumes | Public preview | No | | Edit network features for existing volumes | Public preview | No | | Standard network features | Generally available (GA) | Public preview [(in select regions)](azure-netapp-files-network-topologies.md#supported-regions) | |
azure-netapp-files | Configure Customer Managed Keys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-customer-managed-keys.md | Azure NetApp Files customer-managed keys is supported for the following regions: * UAE Central * UAE North * UK South+* US Gov Virginia (public preview) * West Europe * West US * West US 2 |
azure-netapp-files | Test Disaster Recovery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/test-disaster-recovery.md | -# Test disaster recovery for Azure NetApp Files +# Test disaster recovery using cross-region replication for Azure NetApp Files An effective disaster recovery plan includes testing your disaster recovery configuration. Testing your disaster recovery configuration demonstrates the efficacy of your disaster recovery configuration and that it can achieve the desired recovery point objective (RPO) and recovery time objective (RTO). Testing disaster recovery also ensures that operational runbooks are accurate and that operational staff are trained on the workflow. |
azure-netapp-files | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md | Azure NetApp Files is updated regularly. This article provides a summary about t ## September 2023 +* [Azure NetApp Files customer-managed keys for Azure NetApp Files volume encryption is now available in select US Gov regions (Preview)](configure-customer-managed-keys.md#supported-regions) ++ Customer-managed keys protect the encryption keys of Azure NetApp Files volumes for maximum security. This capability is now available in US Gov Virginia (preview). Healthcare, finance, government, and many other customers can now protect their customer-managed encryption keys within the US Gov Virginia region. + * [Standard network features in select US Gov regions (Preview)](azure-netapp-files-network-topologies.md) Azure NetApp Files now supports Standard network features for new volumes in select US Gov regions. Standard network features provide an enhanced virtual networking experience for a seamless and consistent security posture across all workloads, including Azure NetApp Files. You can now choose Standard or Basic network features when creating a new Azure NetApp Files volume. This feature is generally available in Azure commercial regions and in public preview in select US Gov regions. |
azure-portal | Get Subscription Tenant Id | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/get-subscription-tenant-id.md | Title: Get subscription and tenant IDs in the Azure portal description: Learn how to locate and copy the IDs of Azure tenants and subscriptions. Previously updated : 09/22/2023 Last updated : 09/27/2023 # Get subscription and tenant IDs in the Azure portal -A tenant is an [Azure Active Directory (Azure AD)](../active-directory/fundamentals/active-directory-whatis.md) entity that typically encompasses an organization. Tenants can have one or more subscriptions, which are agreements with Microsoft to use cloud services, including Azure. Every Azure resource is associated with a subscription. +A tenant is a [Microsoft Entra ID](../active-directory/fundamentals/active-directory-whatis.md) entity that typically encompasses an organization. Tenants can have one or more subscriptions, which are agreements with Microsoft to use cloud services, including Azure. Every Azure resource is associated with a subscription. Each subscription has an ID associated with it, as does the tenant to which a subscription belongs. As you perform different tasks, you may need the ID for a subscription or tenant. You can find these values in the Azure portal. Follow these steps to retrieve the ID for a subscription in the Azure portal. 1. Find the subscription in the list, and note the **Subscription ID** shown in the second column. If no subscriptions appear, or you don't see the right one, you may need to [switch directories](set-preferences.md#switch-and-manage-directories) to show the subscriptions from a different Azure AD tenant. 1. To easily copy the **Subscription ID**, select the subscription name to display more details. Select the **Copy to clipboard** icon shown next to the **Subscription ID** in the **Essentials** section. You can paste this value into a text document or other location. 
+ :::image type="content" source="media/get-subscription-tenant-id/copy-subscription-id.png" alt-text="Screenshot showing the option to copy a subscription ID in the Azure portal."::: + > [!TIP] > You can also list your subscriptions and view their IDs programmatically by using [Get-AzSubscription](/powershell/module/az.accounts/get-azsubscription) (Azure PowerShell) or [az account list](/cli/azure/account#az-account-list) (Azure CLI). ## Find your Azure AD tenant -Follow these steps to retrieve the ID for an Azure AD tenant in the Azure portal. +Follow these steps to retrieve the ID for a Microsoft Entra tenant in the Azure portal. 1. Sign in to the [Azure portal](https://portal.azure.com). 1. Confirm that you are signed into the tenant for which you want to retrieve the ID. If not, [switch directories](set-preferences.md#switch-and-manage-directories) so that you're working in the right tenant. Follow these steps to retrieve the ID for an Azure AD tenant in the Azure portal 1. Find the **Tenant ID** in the **Basic information** section of the **Overview** screen. 1. Copy the **Tenant ID** by selecting the **Copy to clipboard** icon shown next to it. You can paste this value into a text document or other location. + :::image type="content" source="media/get-subscription-tenant-id/copy-tenant-id.png" alt-text="Screenshot showing the option to copy a tenant ID in the Azure portal."::: + > [!TIP] > You can also find your tenant programmatically by using [Azure PowerShell](/azure/active-directory/fundamentals/how-to-find-tenant#find-tenant-id-with-powershell) or [Azure CLI](/azure/active-directory/fundamentals/how-to-find-tenant#find-tenant-id-with-cli). |
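Since every Azure resource ID embeds its subscription ID (resource IDs start with `/subscriptions/<sub-id>/...`), the subscription ID can also be read directly out of any resource ID you already have. A minimal sketch, with a placeholder ID:

```python
# Minimal sketch: extract the subscription ID from an Azure resource ID.
# Subscription-scoped resource IDs take the form
# /subscriptions/<sub-id>/resourceGroups/<rg>/providers/...
def subscription_id(resource_id: str) -> str:
    parts = resource_id.strip("/").split("/")
    if len(parts) < 2 or parts[0].lower() != "subscriptions":
        raise ValueError(f"not a subscription-scoped resource ID: {resource_id}")
    return parts[1]

rid = ("/subscriptions/aaaabbbb-0000-cccc-1111-dddd2222eeee"
       "/resourceGroups/demo-rg/providers/Microsoft.Storage/storageAccounts/demosa")
print(subscription_id(rid))  # → aaaabbbb-0000-cccc-1111-dddd2222eeee
```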
azure-resource-manager | Conditional Resource Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/conditional-resource-deployment.md | Sometimes you need to optionally deploy a resource or module in Bicep. Use the ` If you would rather learn about conditions through step-by-step guidance, see [Build flexible Bicep templates by using conditions and loops](/training/modules/build-flexible-bicep-templates-conditions-loops/). -## Deploy condition +## Define condition for deployment -You can pass in a parameter value that indicates whether a resource is deployed. The following example conditionally deploys a DNS zone. +In Bicep, you can conditionally deploy a resource by passing in a parameter that specifies whether the resource is deployed. You test the condition with an `if` statement in the resource declaration. The following example shows a Bicep file that conditionally deploys a DNS zone. When `deployZone` is `true`, it deploys the DNS zone. When `deployZone` is `false`, it skips deploying the DNS zone. ```bicep param deployZone bool |
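The pattern described above, a boolean parameter gating whether a resource is deployed, can be illustrated outside Bicep by conditionally assembling an ARM-style resource list. The following is a hypothetical sketch, not how Bicep itself compiles conditions:

```python
# Minimal sketch: mirror Bicep's `if (deployZone)` condition by including
# a resource definition only when the gating parameter is true.
def build_template(deploy_zone: bool, zone_name: str = "demo.example.com") -> dict:
    resources = []
    if deploy_zone:  # analogue of `resource dnsZone ... = if (deployZone) { ... }`
        resources.append({
            "type": "Microsoft.Network/dnsZones",
            "name": zone_name,
            "location": "global",
        })
    return {"resources": resources}

print(len(build_template(True)["resources"]))   # → 1
print(len(build_template(False)["resources"]))  # → 0
```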
azure-resource-manager | Create Resource Group | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/create-resource-group.md | + + Title: Use Bicep to create a new resource group +description: Describes how to use Bicep to create a new resource group in your Azure subscription. + Last updated : 09/26/2023+++# Create resource groups by using Bicep ++You can use Bicep to create a new resource group. This article shows you how to create resource groups when deploying to either the subscription or another resource group. ++## Define resource group ++To create a resource group with Bicep, define a [Microsoft.Resources/resourceGroups](/azure/templates/microsoft.resources/allversions) resource with a name and location for the resource group. ++The following example shows a Bicep file that creates an empty resource group. Notice that its target scope is `subscription`. ++```bicep +targetScope='subscription' ++param resourceGroupName string +param resourceGroupLocation string ++resource newRG 'Microsoft.Resources/resourceGroups@2022-09-01' = { + name: resourceGroupName + location: resourceGroupLocation +} +``` ++To deploy the Bicep file to a subscription, use the subscription-level deployment commands. ++For Azure CLI, use [az deployment sub create](/cli/azure/deployment/sub#az-deployment-sub-create). ++```azurecli-interactive +az deployment sub create \ + --name demoSubDeployment \ + --location centralus \ + --template-file resourceGroup.bicep \ + --parameters resourceGroupName=demoResourceGroup resourceGroupLocation=centralus +``` ++For the PowerShell deployment command, use [New-AzDeployment](/powershell/module/az.resources/new-azdeployment) or its alias `New-AzSubscriptionDeployment`. 
++```azurepowershell-interactive +New-AzSubscriptionDeployment ` + -Name demoSubDeployment ` + -Location centralus ` + -TemplateFile resourceGroup.bicep ` + -resourceGroupName demoResourceGroup ` + -resourceGroupLocation centralus +``` ++## Create resource group and resources ++To create the resource group and deploy resources to it, add a module that defines the resources to deploy to the resource group. Set the scope for the module to the symbolic name for the resource group you create. You can deploy to up to 800 resource groups. ++The following example shows a Bicep file that creates a resource group, and deploys a storage account to the resource group. Notice that the `scope` property for the module is set to `newRG`, which is the symbolic name for the resource group that is being created. ++```bicep +targetScope='subscription' ++param resourceGroupName string +param resourceGroupLocation string +param storageName string +param storageLocation string ++resource newRG 'Microsoft.Resources/resourceGroups@2022-09-01' = { + name: resourceGroupName + location: resourceGroupLocation +} ++module storageAcct 'storage.bicep' = { + name: 'storageModule' + scope: newRG + params: { + storageLocation: storageLocation + storageName: storageName + } +} +``` ++The module uses a Bicep file named **storage.bicep** with the following contents: ++```bicep +param storageLocation string +param storageName string ++resource storageAcct 'Microsoft.Storage/storageAccounts@2022-09-01' = { + name: storageName + location: storageLocation + sku: { + name: 'Standard_LRS' + } + kind: 'Storage' + properties: {} +} +``` ++## Create resource group during resource group deployment ++You can also create a resource group during a resource group level deployment. For that scenario, you deploy to an existing resource group and switch to the level of a subscription to create a resource group. The following Bicep file creates a new resource group in the specified subscription. 
The module that creates the resource group uses the same Bicep file shown in the first example. ++```bicep +param secondResourceGroup string +param secondSubscriptionID string = '' +param secondLocation string ++// module deployed at subscription level +module newRG 'resourceGroup.bicep' = { + name: 'newResourceGroup' + scope: subscription(secondSubscriptionID) + params: { + resourceGroupName: secondResourceGroup + resourceGroupLocation: secondLocation + } +} +``` ++To deploy to a resource group, use the resource group deployment commands. ++For Azure CLI, use [az deployment group create](/cli/azure/deployment/group#az-deployment-group-create). ++```azurecli-interactive +az deployment group create \ + --name demoRGDeployment \ + --resource-group ExampleGroup \ + --template-file main.bicep \ + --parameters secondResourceGroup=newRG secondSubscriptionID={sub-id} secondLocation=westus +``` ++For the PowerShell deployment command, use [New-AzResourceGroupDeployment](/powershell/module/az.resources/new-azresourcegroupdeployment). ++```azurepowershell-interactive +New-AzResourceGroupDeployment ` + -Name demoRGDeployment ` + -ResourceGroupName ExampleGroup ` + -TemplateFile main.bicep ` + -secondResourceGroup newRG ` + -secondSubscriptionID {sub-id} ` + -secondLocation westus +``` ++## Next steps ++To learn about other scopes, see: ++* [Resource group deployments](deploy-to-resource-group.md) +* [Subscription deployments](deploy-to-subscription.md) +* [Management group deployments](deploy-to-management-group.md) +* [Tenant deployments](deploy-to-tenant.md) |
azure-resource-manager | Deploy To Resource Group | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-to-resource-group.md | module exampleModule 'module.bicep' = { } ``` -For an example template, see [Create resource group](#create-resource-group). +For an example template, see [Create resource group with Bicep](create-resource-group.md). ### Scope to tenant resource storageAcct 'Microsoft.Storage/storageAccounts@2019-06-01' = { ## Create resource group -From a resource group deployment, you can switch to the level of a subscription and create a resource group. The following template deploys a storage account to the target resource group, and creates a new resource group in the specified subscription. --```bicep -@maxLength(11) -param storagePrefix string --param firstStorageLocation string = resourceGroup().location --param secondResourceGroup string -param secondSubscriptionID string = '' -param secondLocation string --var firstStorageName = '${storagePrefix}${uniqueString(resourceGroup().id)}' --// resource deployed to target resource group -module firstStorageAcct 'storage2.bicep' = { - name: 'storageModule1' - params: { - storageLocation: firstStorageLocation - storageName: firstStorageName - } -} --// module deployed to subscription -module newRG 'resourceGroup.bicep' = { - name: 'newResourceGroup' - scope: subscription(secondSubscriptionID) - params: { - resourceGroupName: secondResourceGroup - resourceGroupLocation: secondLocation - } -} -``` --The preceding example uses the following Bicep file for the module that creates the new resource group. --```bicep -targetScope='subscription' --param resourceGroupName string -param resourceGroupLocation string --resource newRG 'Microsoft.Resources/resourceGroups@2021-01-01' = { - name: resourceGroupName - location: resourceGroupLocation -} -``` +For information about creating resource groups, see [Create resource group with Bicep](create-resource-group.md). 
## Next steps |
azure-resource-manager | Deploy To Subscription | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-to-subscription.md | Title: Use Bicep to deploy resources to subscription -description: Describes how to create a Bicep file that deploys resources to the Azure subscription scope. It shows how to create a resource group. +description: Describes how to create a Bicep file that deploys resources to the Azure subscription scope. Previously updated : 06/23/2023 Last updated : 09/26/2023 # Subscription deployments with Bicep files -This article describes how to set scope with Bicep when deploying to a subscription. +To simplify the management of resources, you can deploy resources at the level of your Azure subscription. For example, you can deploy [policies](../../governance/policy/overview.md) and [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) to your subscription, which applies them across your subscription. -To simplify the management of resources, you can deploy resources at the level of your Azure subscription. For example, you can deploy [policies](../../governance/policy/overview.md) and [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) to your subscription, which applies them across your subscription. You can also create resource groups within the subscription and deploy resources to resource groups in the subscription. +This article describes how to set the deployment scope to a subscription in a Bicep file. > [!NOTE] > You can deploy to 800 different resource groups in a subscription level deployment. resource exampleResource 'Microsoft.Resources/resourceGroups@2022-09-01' = { } ``` -For examples of deploying to the subscription, see [Create resource groups](#create-resource-groups) and [Assign policy definition](#assign-policy-definition). 
+For examples of deploying to the subscription, see [Create resource groups with Bicep](create-resource-group.md) and [Assign policy definition](#assign-policy-definition). To deploy resources to a subscription that is different than the subscription from the operation, add a [module](modules.md). Use the [subscription function](bicep-functions-scope.md#subscription) to set the `scope` property. Provide the `subscriptionId` property to the ID of the subscription you want to deploy to. module exampleModule 'module.bicep' = { } ``` -If the resource group is created in the same Bicep file, use the symbolic name of the resource group to set the scope value. For an example of setting the scope to the symbolic name, see [Create resource group and resources](#create-resource-group-and-resources). +If the resource group is created in the same Bicep file, use the symbolic name of the resource group to set the scope value. For an example of setting the scope to the symbolic name, see [Create resource group with Bicep](create-resource-group.md). ### Scope to tenant For more information, see [Management group](deploy-to-management-group.md#manag ## Resource groups -### Create resource groups --To create a resource group, define a [Microsoft.Resources/resourceGroups](/azure/templates/microsoft.resources/allversions) resource with a name and location for the resource group. --The following example creates an empty resource group. --```bicep -targetScope='subscription' --param resourceGroupName string -param resourceGroupLocation string --resource newRG 'Microsoft.Resources/resourceGroups@2022-09-01' = { - name: resourceGroupName - location: resourceGroupLocation -} -``` --### Create resource group and resources --To create the resource group and deploy resources to it, add a module. The module includes the resources to deploy to the resource group. Set the scope for the module to the symbolic name for the resource group you create. You can deploy to up to 800 resource groups. 
--The following example creates a resource group, and deploys a storage account to the resource group. Notice that the `scope` property for the module is set to `newRG`, which is the symbolic name for the resource group that is being created. --```bicep -targetScope='subscription' --param resourceGroupName string -param resourceGroupLocation string -param storageName string -param storageLocation string --resource newRG 'Microsoft.Resources/resourceGroups@2022-09-01' = { - name: resourceGroupName - location: resourceGroupLocation -} --module storageAcct 'storage.bicep' = { - name: 'storageModule' - scope: newRG - params: { - storageLocation: storageLocation - storageName: storageName - } -} -``` --The module uses a Bicep file named **storage.bicep** with the following contents: --```bicep -param storageLocation string -param storageName string --resource storageAcct 'Microsoft.Storage/storageAccounts@2022-09-01' = { - name: storageName - location: storageLocation - sku: { - name: 'Standard_LRS' - } - kind: 'Storage' - properties: {} -} -``` +For information about creating resource groups, see [Create resource group with Bicep](create-resource-group.md). ## Azure Policy |
azure-resource-manager | User Defined Data Types | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/user-defined-data-types.md | Title: User-defined types in Bicep description: Describes how to define and use user-defined data types in Bicep. Previously updated : 09/20/2023 Last updated : 09/26/2023 # User-defined data types in Bicep In addition to being used in the `type` statement, type expressions can also be use param mixedTypeArray ('fizz' | 42 | {an: 'object'} | null)[] ``` -## An example - A typical Bicep file to create a storage account looks like: ```bicep resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' = { } ``` +## Declare tagged union type ++To declare a custom tagged union data type within a Bicep file, you can place a discriminator decorator above a user-defined type declaration. [Bicep version 0.21.1 or newer](./install.md) is required to use this decorator. The syntax is: ++```bicep +@discriminator('<propertyName>') +``` ++The discriminator decorator takes a single parameter, which represents a shared property name among all union members. This property name must be a required string literal on all members and is case-sensitive. The values of the discriminated property on the union members must be unique in a case-insensitive manner. ++The following example shows how to declare a tagged union type: ++```bicep +type FooConfig = { + type: 'foo' + value: int +} ++type BarConfig = { + type: 'bar' + value: bool +} ++@discriminator('type') +type ServiceConfig = FooConfig | BarConfig | { type: 'baz', *: string } ++param serviceConfig ServiceConfig = { type: 'bar', value: true } ++output config object = serviceConfig +``` ++The parameter value is validated based on the discriminated property value. In the preceding example, if the *serviceConfig* parameter value is of type *foo*, it undergoes validation using the *FooConfig* type.
Likewise, if the parameter value is of type *bar*, validation is performed using the *BarConfig* type, and this pattern continues for other types as well. + ## Import types between Bicep files (Preview) [Bicep version 0.21.1 or newer](./install.md) is required to use this compile-time import feature. The experimental flag `compileTimeImports` must be enabled from the [Bicep config file](./bicep-config.md#enable-experimental-features). |
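The discriminator-based validation that Bicep applies to the shared `type` property can be illustrated in plain Python. The following is a minimal sketch mirroring only the *FooConfig* and *BarConfig* members of the Bicep example above (the helper and member table are hypothetical, not Bicep's actual implementation):

```python
# Minimal sketch of discriminated-union validation: pick the member type
# by the shared 'type' property, then check the member's fields.
MEMBERS = {
    "foo": {"value": int},    # mirrors FooConfig
    "bar": {"value": bool},   # mirrors BarConfig
}

def validate(config: dict) -> bool:
    member = MEMBERS.get(config.get("type"))
    if member is None:  # unknown discriminator value
        return False
    return all(isinstance(config.get(field), expected)
               for field, expected in member.items())

print(validate({"type": "bar", "value": True}))    # → True
print(validate({"type": "foo", "value": "oops"}))  # → False
```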
azure-resource-manager | Azure Subscription Service Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-subscription-service-limits.md | Title: Azure subscription limits and quotas description: Provides a list of common Azure subscription and service limits, quotas, and constraints. This article includes information on how to increase limits along with maximum values. Previously updated : 08/24/2023 Last updated : 09/26/2023 # Azure subscription and service limits, quotas, and constraints |
azure-resource-manager | Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/best-practices.md | Title: Best practices for templates description: Describes recommended approaches for authoring Azure Resource Manager templates (ARM templates). Offers suggestions to avoid common problems when using templates. Previously updated : 09/01/2022 Last updated : 09/22/2023 # ARM template best practices This article shows you how to use recommended practices when constructing your A ## Template limits -Limit the size of your template to 4 MB. The 4-MB limit applies to the final state of the template after it has been expanded with iterative resource definitions, and values for variables and parameters. The parameter file is also limited to 4 MB. You may get an error with a template or parameter file of less than 4 MB if the total size of the request is too large. For more information about how to simplify your template to avoid a large request, see [Resolve errors for job size exceeded](error-job-size-exceeded.md). +Limit the size of your template to 4 MB, and each resource definition to 1 MB. The limits apply to the final state of the template after it has been expanded with iterative resource definitions, and values for variables and parameters. The parameter file is also limited to 4 MB. You may get an error with a template or parameter file of less than 4 MB if the total size of the request is too large. For more information about how to simplify your template to avoid a large request, see [Resolve errors for job size exceeded](error-job-size-exceeded.md). You're also limited to: |
azure-resource-manager | Error Job Size Exceeded | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/error-job-size-exceeded.md | Title: Job size exceeded error description: Describes how to troubleshoot errors for job size exceeded or if the template is too large for deployments using a Bicep file or Azure Resource Manager template (ARM template). Previously updated : 04/05/2023 Last updated : 09/22/2023 # Resolve errors for job size exceeded You get this error when the deployment exceeds an allowed limit. Typically, you The deployment job can't exceed 1 MB and that includes metadata about the request. For large templates, the metadata combined with the template might exceed a job's allowed size. -The template can't exceed 4 MB. The 4-MB limit applies to the final state of the template after it has been expanded for resource definitions that use loops to create many instances. The final state also includes the resolved values for variables and parameters. +The template can't exceed 4 MB, and each resource definition can't exceed 1 MB. The limits apply to the final state of the template after it has been expanded for resource definitions that use loops to create many instances. The final state also includes the resolved values for variables and parameters. Other template limits are: |
azure-video-indexer | Video Indexer Embed Widgets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-embed-widgets.md | A Cognitive Insights widget includes all visual insights that were extracted fro |Name|Definition|Description| ||||-|`widgets` | Strings separated by comma | Allows you to control the insights that you want to render.<br/>Example: `https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>/?widgets=people,keywords` renders only people and keywords UI insights.<br/>Available options: `people`, `keywords`, `audioEffects`, `labels`, `sentiments`, `emotions`, `topics`, `keyframes`, `transcript`, `ocr`, `speakers`, `scenes`, `spokenLanguage`, `observedPeople`, `namedEntities`.| +|`widgets` | Strings separated by comma | Allows you to control the insights that you want to render.<br/>Example: `https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>/?widgets=people,keywords` renders only people and keywords UI insights.<br/>Available options: `people`, `keywords`, `audioEffects`, `labels`, `sentiments`, `emotions`, `topics`, `keyframes`, `transcript`, `ocr`, `speakers`, `scenes`, `spokenLanguage`, `observedPeople`, `namedEntities`, `detectedObjects`.| |`controls`|Strings separated by comma|Allows you to control the controls that you want to render.<br/>Example: `https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>/?controls=search,download` renders only search option and download button.<br/>Available options: `search`, `download`, `presets`, `language`.| |`language`|A short language code (language name)|Controls insights language.<br/>Example: `https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>/?language=es-es` <br/>or `https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>/?language=spanish`| |`locale` | A short language code | Controls the language of the UI. The default value is `en`. 
<br/>Example: `locale=de`.| You can use the Player widget to stream video by using adaptive bit rate. The Pl |`autoplay` | A Boolean value | Indicates if the player should start playing the video when loaded. The default value is `true`.<br/> Example: `autoplay=false`. | |`language`/`locale` | A language code | Controls the player language. The default value is `en-US`.<br/>Example: `language=de-DE`.| |`location` ||The `location` parameter must be included in the embedded links, see [how to get the name of your region](regions.md). If your account is in preview, the `trial` should be used for the location value. `trial` is the default value for the `location` parameter.| -|`boundingBoxes`|Array of bounding boxes options: people (faces) and observed people. <br/>Values should be separated by a comma (",").|Controls the option to set bounding boxes on/off when embedding the player.<br/>All mentioned option will be turned on.<br/><br/>Example: `boundingBoxes= observedPeople, people`<br/>Default value is `boundingBoxes= observedPeople` (only observed people bounding box are turned on).| +|`boundingBoxes`|Array of bounding boxes. Options: people (faces), observed people, and detected objects. <br/>Values should be separated by a comma (",").|Controls the option to set bounding boxes on/off when embedding the player.<br/>All mentioned options will be turned on.<br/><br/>Example: `boundingBoxes=observedPeople,people,detectedObjects`<br/>Default value is `boundingBoxes=observedPeople,detectedObjects` (only observed people and detected objects bounding boxes are turned on).| ### Editor widget To embed a video, use the website as described below: 1. Sign in to the [Azure AI Video Indexer](https://www.videoindexer.ai/) website. 1. Select the video that you want to work with and press **Play**.-1. Select the type of widget that you want (**Cognitive Insights**, **Player**, or **Editor**). +1. Select the type of widget that you want (**Insights**, **Player**, or **Editor**). 1.
Click **</> Embed**. 5. Copy the embed code (appears in **Copy the embedded code** in the **Share & Embed** dialog). 6. Add the code to your app. If you embed Azure AI Video Indexer insights with your own [Azure Media Player]( You can choose the types of insights that you want. To do this, specify them as a value to the following URL parameter that's added to the embed code that you get (from the [API](https://aka.ms/avam-dev-portal) or from the [Azure AI Video Indexer](https://www.videoindexer.ai/) website): `&widgets=<list of wanted widgets>`. -The possible values are: `people`, `keywords`, `labels`, `sentiments`, `emotions`, `topics`, `keyframes`, `transcript`, `ocr`, `speakers`, `scenes`, `namedEntities`, `logos`. +The possible values are listed [here](#cognitive-insights-widget). For example, if you want to embed a widget that contains only people and keywords insights, the iframe embed URL will look like this: |
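The `widgets`, `controls`, `language`, and `locale` options described in the row above are all plain query-string parameters appended to the embed URL. As an illustrative sketch only (the helper name and the account/video IDs are made up, and this is not part of any Video Indexer SDK), assembling such a URL in Python might look like:

```python
from urllib.parse import urlencode

def build_insights_embed_url(account_id, video_id, widgets=None, controls=None, locale=None):
    """Assemble an Azure AI Video Indexer insights embed URL.

    widgets and controls are lists that are joined into the
    comma-separated strings the widget expects, e.g. widgets=people,keywords.
    """
    base = f"https://www.videoindexer.ai/embed/insights/{account_id}/{video_id}/"
    params = {}
    if widgets:
        params["widgets"] = ",".join(widgets)
    if controls:
        params["controls"] = ",".join(controls)
    if locale:
        params["locale"] = locale
    # keep commas literal, as in the documented example URLs
    return base + ("?" + urlencode(params, safe=",") if params else "")

print(build_insights_embed_url("acc123", "vid456",
                               widgets=["people", "keywords"],
                               controls=["search", "download"]))
# → https://www.videoindexer.ai/embed/insights/acc123/vid456/?widgets=people,keywords&controls=search,download
```

Unrecognized option values are simply ignored by the widget, so a builder like this is mainly a guard against typos in your own code.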
azure-vmware | Azure Vmware Solution Platform Updates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-platform-updates.md | Azure Arc-enabled VMware vSphere has a new refresh for the public preview. Now c VMware Cloud Director service for Azure VMware Solution is now available for enterprise. VMware Cloud Director service provides a multi-cloud control plane for managing multi-tenancy on infrastructure ranging from on-premises customer data centers, managed service provider facilities, and in the cloud. [Learn more](https://blogs.vmware.com/cloud/2023/08/15/cloud-director-service-ga-for-avs/) -**Stretched Clusters Generally Available** --Stretched Clusters for Azure VMware Solution is now available and provides 99.99 percent uptime for mission critical applications that require the highest availability. In times of availability zone failure, your virtual machines (VMs) and applications automatically failover to an unaffected availability zone with no application impact. [Learn more](deploy-vsan-stretched-clusters.md) - **Well-Architected Assessment Tool** Azure VMware Solution Well-Architected Assessment Tool is now available. Based upon the Microsoft Azure Well-Architected Framework, the assessment tool methodically checks how your workloads align with best practices for resiliency, security, efficiency, and cost optimization. [Learn more](https://aka.ms/avswafdocs) VMware Cloud Universal now includes Azure VMware Solution. [Learn more](https:// Customers using the cloudadmin@vsphere.local credentials with the vSphere Client now have read-only access to the Management Resource Pool that contains the management and control plane of Azure VMware Solution (vCenter Server, NSX-T Data Center, HCX Manager, SRM Manager). 
+## June 2023 ++**Stretched Clusters Generally Available** ++Stretched Clusters for Azure VMware Solution is now available and provides 99.99 percent uptime for mission critical applications that require the highest availability. In times of availability zone failure, your virtual machines (VMs) and applications automatically failover to an unaffected availability zone with no application impact. [Learn more](deploy-vsan-stretched-clusters.md) ++ ## May 2023 **Azure VMware Solution in Azure Gov** Documented workarounds for the vSphere stack, as per [VMSA-2021-0002](https://ww ## Post update Once complete, newer versions of VMware solution components will appear. If you notice any issues or have any questions, contact our support team by opening a support ticket.+ |
cloud-services | Cloud Services Guestos Msrc Releases | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-msrc-releases.md | The following tables show the Microsoft Security Response Center (MSRC) updates ## September 2023 Guest OS ->[!NOTE] -->The September Guest OS is currently being rolled out to Cloud Service VMs that are configured for automatic updates. When the rollout is complete, this version will be made available for manual updates through the Azure portal and configuration files. The following patches are included in the September Guest OS. This list is subject to change. | Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced | | | | | | |-| Rel 23-09 | [5030214] | Latest Cumulative Update(LCU) | 6.62 | Sep 12, 2023 | -| Rel 23-09 | [5030216] | Latest Cumulative Update(LCU) | 7.31 | Sep 12, 2023 | -| Rel 23-09 | [5030213] | Latest Cumulative Update(LCU) | 5.86 | Sep 12, 2023 | -| Rel 23-09 | [5029938] | .NET Framework 3.5 Security and Quality Rollup | 2.142 | Sep 12, 2023 | -| Rel 23-09 | [5029933] | .NET Framework 4.7.2 Security and Quality Rollup | 2.142 | Sep 12, 2023 | -| Rel 23-09 | [5029915] | .NET Framework 3.5 Security and Quality Rollup LKG | 4.122 | Sep 12, 2023 | -| Rel 23-09 | [5029916] | .NET Framework 4.7.2 Cumulative Update LKG | 4.122 | Sep 12, 2023 | -| Rel 23-09 | [5030160] | .NET Framework 3.5 Security and Quality Rollup LKG | 3.130 | Sep 12, 2023 | -| Rel 23-09 | [5029932] | .NET Framework 4.7.2 Cumulative Update LKG | 3.130 | Sep 12, 2023 | -| Rel 23-09 | [5029931] | .NET Framework DotNet | 6.62 | Sep 12, 2023 | -| Rel 23-09 | [5029928] | .NET Framework 4.8 Security and Quality Rollup LKG | 7.31 | Sep 12, 2023 | -| Rel 23-09 | [5030265] | Monthly Rollup | 2.142 | Sep 12, 2023 | -| Rel 23-09 | [5030278] | Monthly Rollup | 3.130 | Sep 12, 2023 | -| Rel 23-09 | [5030269] | Monthly Rollup | 4.122 | Sep 12, 2023 | -| Rel 23-09 | [5030330] 
| Servicing Stack Update | 3.130 | Sep 12, 2023 | -| Rel 23-09 | [5030329] | Servicing Stack Update LKG | 4.122 | Sep 12, 2023 | -| Rel 23-09 | [5030504] | Servicing Stack Update LKG | 5.86 | Sep 12, 2023 | -| Rel 23-09 | [5028264] | Servicing Stack Update LKG | 2.142 | Jul 11, 2023 | -| Rel 23-09 | [4494175] | January '20 Microcode | 5.86 | Sep 1, 2020 | -| Rel 23-09 | [4494174] | January '20 Microcode | 6.62 | Sep 1, 2020 | -| Rel 23-09 | 5030369 | Servicing Stack Update | 7.31 | | -| Rel 23-09 | 5030505 | Servicing Stack Update | 6.62 | | +| Rel 23-09 | [5030214] | Latest Cumulative Update(LCU) | [6.62] | Sep 12, 2023 | +| Rel 23-09 | [5030216] | Latest Cumulative Update(LCU) | [7.31] | Sep 12, 2023 | +| Rel 23-09 | [5030213] | Latest Cumulative Update(LCU) | [5.86] | Sep 12, 2023 | +| Rel 23-09 | [5029938] | .NET Framework 3.5 Security and Quality Rollup | [2.142] | Sep 12, 2023 | +| Rel 23-09 | [5029933] | .NET Framework 4.7.2 Security and Quality Rollup | [2.142] | Sep 12, 2023 | +| Rel 23-09 | [5029915] | .NET Framework 3.5 Security and Quality Rollup LKG | [4.122] | Sep 12, 2023 | +| Rel 23-09 | [5029916] | .NET Framework 4.7.2 Cumulative Update LKG | [4.122] | Sep 12, 2023 | +| Rel 23-09 | [5030160] | .NET Framework 3.5 Security and Quality Rollup LKG | [3.130] | Sep 12, 2023 | +| Rel 23-09 | [5029932] | .NET Framework 4.7.2 Cumulative Update LKG | [3.130] | Sep 12, 2023 | +| Rel 23-09 | [5029931] | .NET Framework DotNet | [6.62] | Sep 12, 2023 | +| Rel 23-09 | [5029928] | .NET Framework 4.8 Security and Quality Rollup LKG | [7.31] | Sep 12, 2023 | +| Rel 23-09 | [5030265] | Monthly Rollup | [2.142] | Sep 12, 2023 | +| Rel 23-09 | [5030278] | Monthly Rollup | [3.130] | Sep 12, 2023 | +| Rel 23-09 | [5030269] | Monthly Rollup | [4.122] | Sep 12, 2023 | +| Rel 23-09 | [5030330] | Servicing Stack Update | [3.130] | Sep 12, 2023 | +| Rel 23-09 | [5030329] | Servicing Stack Update LKG | [4.122] | Sep 12, 2023 | +| Rel 23-09 | [5030504] | Servicing Stack Update 
LKG | [5.86] | Sep 12, 2023 | +| Rel 23-09 | [5028264] | Servicing Stack Update LKG | [2.142] | Jul 11, 2023 | +| Rel 23-09 | [4494175] | January '20 Microcode | [5.86] | Sep 1, 2020 | +| Rel 23-09 | [4494174] | January '20 Microcode | [6.62] | Sep 1, 2020 | +| Rel 23-09 | 5030369 | Servicing Stack Update | [7.31] | | +| Rel 23-09 | 5030505 | Servicing Stack Update | [6.62] | | [5030214]: https://support.microsoft.com/kb/5030214 [5030216]: https://support.microsoft.com/kb/5030216 The following tables show the Microsoft Security Response Center (MSRC) updates [5030504]: https://support.microsoft.com/kb/5030504 [5028264]: https://support.microsoft.com/kb/5028264 [5030505]: https://support.microsoft.com/kb/5030505+[2.142]: ./cloud-services-guestos-update-matrix.md#family-2-releases +[3.130]: ./cloud-services-guestos-update-matrix.md#family-3-releases +[4.122]: ./cloud-services-guestos-update-matrix.md#family-4-releases +[5.86]: ./cloud-services-guestos-update-matrix.md#family-5-releases +[6.62]: ./cloud-services-guestos-update-matrix.md#family-6-releases +[7.31]: ./cloud-services-guestos-update-matrix.md#family-7-releases |
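The reference-link definitions above are mechanical: each `[50xxxxx]` KB number maps to a support.microsoft.com article, and each `[N.MMM]` Guest OS version maps to its family section in the update matrix. A small sketch of that mapping (the helper names are my own):

```python
def kb_url(kb_number: str) -> str:
    """Support article URL used by the [50xxxxx]-style reference links."""
    return f"https://support.microsoft.com/kb/{kb_number}"

def family_anchor(os_version: str) -> str:
    """Update-matrix anchor used by the [N.MMM]-style reference links.

    The Guest OS family is the part before the dot, e.g. '2.142' -> family 2.
    """
    family = os_version.split(".")[0]
    return f"./cloud-services-guestos-update-matrix.md#family-{family}-releases"

print(kb_url("5030214"))       # https://support.microsoft.com/kb/5030214
print(family_anchor("2.142"))  # ./cloud-services-guestos-update-matrix.md#family-2-releases
```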
cloud-services | Cloud Services Guestos Update Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-update-matrix.md | Unsure about how to update your Guest OS? Check [this][cloud updates] out. ## News updates +###### **September 26, 2023** +The September Guest OS has released. + ###### **August 21, 2023** The August Guest OS has released. The September Guest OS has released. | Configuration string | Release date | Disable date | | | | |+| WA-GUEST-OS-7.32_202309-01 | September 25, 2023 | Post 7.34 | | WA-GUEST-OS-7.30_202308-01 | August 21, 2023 | Post 7.32 |-| WA-GUEST-OS-7.28_202307-01 | July 27, 2023 | Post 7.31 | +|~~WA-GUEST-OS-7.28_202307-01~~| July 27, 2023 | September 25, 2023 | |~~WA-GUEST-OS-7.27_202306-02~~| July 8, 2023 | August 21, 2023 | |~~WA-GUEST-OS-7.25_202305-01~~| May 19, 2023 | July 27, 2023 | |~~WA-GUEST-OS-7.24_202304-01~~| April 27, 2023 | July 8, 2023 | The September Guest OS has released. | Configuration string | Release date | Disable date | | | | |+| WA-GUEST-OS-6.62_202309-01 | September 25, 2023 | Post 6.64 | | WA-GUEST-OS-6.61_202308-01 | August 21, 2023 | Post 6.63 |-| WA-GUEST-OS-6.60_202307-01 | July 27, 2023 | Post 6.62 | +|~~WA-GUEST-OS-6.60_202307-01~~| July 27, 2023 | September 25, 2023 | |~~WA-GUEST-OS-6.59_202306-02~~| July 8, 2023 | August 21, 2023 | |~~WA-GUEST-OS-6.57_202305-01~~| May 19, 2023 | July 27, 2023 | |~~WA-GUEST-OS-6.56_202304-01~~| April 27, 2023 | July 8, 2023 | The September Guest OS has released. 
| Configuration string | Release date | Disable date | | | | |+| WA-GUEST-OS-5.86_202309-01 | September 25, 2023 | Post 5.88 | | WA-GUEST-OS-5.85_202308-01 | August 21, 2023 | Post 5.87 | -| WA-GUEST-OS-5.84_202307-01 | July 27, 2023 | Post 5.86 | +|~~WA-GUEST-OS-5.84_202307-01~~| July 27, 2023 | September 25, 2023 | |~~WA-GUEST-OS-5.83_202306-02~~| July 8, 2023 | August 21, 2023 | |~~WA-GUEST-OS-5.81_202305-01~~| May 19, 2023 | July 27, 2023 | |~~WA-GUEST-OS-5.80_202304-01~~| April 27, 2023 | July 8, 2023 | The September Guest OS has released. | Configuration string | Release date | Disable date | | | | |+| WA-GUEST-OS-4.122_202309-01 | September 25, 2023 | Post 4.124 | | WA-GUEST-OS-4.121_202308-01 | August 21, 2023 | Post 4.123 |-| WA-GUEST-OS-4.120_202307-01 | July 27, 2023 | Post 4.122 | +|~~WA-GUEST-OS-4.120_202307-01~~| July 27, 2023 | September 25, 2023 | |~~WA-GUEST-OS-4.119_202306-02~~| July 8, 2023 | August 21, 2023 | |~~WA-GUEST-OS-4.117_202305-01~~| May 19, 2023 | July 27, 2023 | |~~WA-GUEST-OS-4.116_202304-01~~| April 27, 2023 | July 8, 2023 | The September Guest OS has released. | Configuration string | Release date | Disable date | | | | |+| WA-GUEST-OS-3.130_202309-01 | September 25, 2023 | Post 3.132 | | WA-GUEST-OS-3.129_202308-01 | August 21, 2023 | Post 3.131 |-| WA-GUEST-OS-3.128_202307-01 | July 27, 2023 | Post 3.130 | +|~~WA-GUEST-OS-3.128_202307-01~~| July 27, 2023 | September 25, 2023 | |~~WA-GUEST-OS-3.127_202306-02~~| July 8, 2023 | August 21, 2023 | |~~WA-GUEST-OS-3.125_202305-01~~| May 19, 2023 | July 27, 2023 | |~~WA-GUEST-OS-3.124_202304-02~~| April 27, 2023 | July 8, 2023 | The September Guest OS has released. 
| Configuration string | Release date | Disable date | | | | |+| WA-GUEST-OS-2.142_202309-01 | September 25, 2023 | Post 2.144 | | WA-GUEST-OS-2.141_202308-01 | August 21, 2023 | Post 2.143 |-| WA-GUEST-OS-2.140_202307-01 | July 27, 2023 | Post 2.142 | +|~~WA-GUEST-OS-2.140_202307-01~~| July 27, 2023 | September 25, 2023 | |~~WA-GUEST-OS-2.139_202306-02~~| July 8, 2023 | August 21, 2023 | |~~WA-GUEST-OS-2.137_202305-01~~| May 19, 2023 | July 27, 2023 | |~~WA-GUEST-OS-2.136_202304-01~~| April 27, 2023 | July 8, 2023 | |
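Each configuration string in the tables above encodes the Guest OS family, the family version, and the year/month of the release. Assuming the `WA-GUEST-OS-<family>.<version>_<YYYYMM>-<NN>` pattern the tables show (the parser below is an illustration, not a Microsoft tool), the string can be split apart like this:

```python
import re

CONFIG_RE = re.compile(
    r"WA-GUEST-OS-(?P<family>\d+)\.(?P<version>\d+)"
    r"_(?P<year>\d{4})(?P<month>\d{2})-(?P<seq>\d{2})"
)

def parse_config_string(s: str) -> dict:
    """Split a Guest OS configuration string into its numeric components."""
    m = CONFIG_RE.fullmatch(s)
    if m is None:
        raise ValueError(f"unrecognized configuration string: {s!r}")
    return {k: int(v) for k, v in m.groupdict().items()}

print(parse_config_string("WA-GUEST-OS-7.32_202309-01"))
# → {'family': 7, 'version': 32, 'year': 2023, 'month': 9, 'seq': 1}
```

This makes it easy, for example, to group the disable-date tables by family or sort configuration strings by release month.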
communication-services | Privacy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/privacy.md | Any Event Grid system topic configured with Azure Communication Services is crea Your application manages the relationship between human users and Communication Service identities. When you want to delete data for a human user, you must delete data involving all Communication Service identities correlated for the user. There are two categories of Communication Service data:-- **API Data.** This data is created and managed by Communication Service APIs, a typical example being Chat messages managed through Chat APIs.+- **API Data.** This data is created and managed with Communication Service APIs, for example, Chat messages managed through Chat APIs. - **Azure Monitor Logs** This data is created by the service and managed through the Azure Monitor data platform. This data includes telemetry and metrics to help you understand your Communication Services usage. ## API data Azure Communication Services maintains a directory of phone numbers associated w ### Chat -Chat threads and messages are kept for 90 days unless explicitly deleted by the customer sooner due to their internal policies. Customers that require the option of keeping messages longer need to submit [a request to Azure Support](../../azure-portal/supportability/how-to-create-azure-support-request.md). +Azure Communication Services stores chat messages indefinitely until they are deleted. Chat thread participants can use ListMessages to view message history for a particular thread. Users who are removed from a chat thread can view previous message history but cannot send or receive new messages. Accidentally deleted messages are not recoverable by the system. Use [Chat APIs](/rest/api/communication/chat/chatthread) to get, list, update, and delete messages. 
Audio and video communication is ephemerally processed by the service and no cal Call recordings are stored temporarily in the same geography that was selected for ```Data Location``` during resource creation for 48 hours. After this, the recording is deleted and you are responsible for storing the recording in a secure and compliant location. ### Email-Email message content is ephemerally stored for processing in the resource's ```Data Location``` specified by you during resource provisioning. Email message delivery logs are available in Azure Monitor Logs, where you will be in control to define the workspace to store logs. Domain sender usernames (or MailFrom) values are stored in the resource's ```Data Location``` until explicitly deleted. Recipient's email addresses that result in hard bounced messages will be temporarily retained for spam and abuse prevention and detection. +Email message content is ephemerally stored for processing in the resource's ```Data Location``` specified by you during resource provisioning. Email message delivery logs are available in Azure Monitor Logs, where you control which workspace stores the logs. Domain sender usernames (or MailFrom) values are stored in the resource's ```Data Location``` until explicitly deleted. Recipient email addresses that result in hard-bounced messages are temporarily retained for spam and abuse prevention and detection. ## Azure Monitor and Log Analytics |
communication-services | Actions For Call Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/actions-for-call-control.md | No events are published for reject action. ## Redirect a call -You can choose to redirect an incoming call to one or more endpoints without answering it. Redirecting a call removes your application's ability to control the call using Call Automation. +You can choose to redirect an incoming call to another endpoint without answering it. Redirecting a call removes your application's ability to control the call using Call Automation. # [csharp](#tab/csharp) |
container-apps | Containers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/containers.md | For more information about configuring user-assigned identities, see [Add a user Azure Container Apps has the following limitations: -- **Privileged containers**: Azure Container Apps can't run privileged containers. If your program attempts to run a process that requires root access, the application inside the container experiences a runtime error.+- **Privileged containers**: Azure Container Apps doesn't allow privileged containers with host-level access. - **Operating system**: Linux-based (`linux/amd64`) container images are required. |
container-apps | Networking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/networking.md | Container Apps has two different [environment types](environment.md#types), whic | Environment type | Description | Supported plan types | |||| | Workload profiles | Supports user defined routes (UDR) and egress through NAT Gateway. The minimum required subnet size is `/27`. | Consumption, Dedicated |-| Consumption only | Doesn't support user defined routes (UDR) and egress through NAT Gateway. The minimum required subnet size is `/23`. | Consumption | +| Consumption only | Doesn't support user defined routes (UDR), egress through NAT Gateway, peering through a remote gateway, or other custom egress. The minimum required subnet size is `/23`. | Consumption | ## Accessibility levels |
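The minimum subnet sizes above (`/27` for workload profiles, `/23` for Consumption-only environments) translate into very different address counts, which Python's standard `ipaddress` module can illustrate. Note this sketch counts raw addresses only; Azure additionally reserves five addresses in every subnet, which is not modeled here.

```python
import ipaddress

# Minimum subnet sizes from the table above: /27 for workload profiles,
# /23 for Consumption-only environments.
for prefix in (27, 23):
    net = ipaddress.ip_network(f"10.0.0.0/{prefix}")
    print(f"/{prefix}: {net.num_addresses} addresses")
# → /27: 32 addresses
# → /23: 512 addresses
```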
cosmos-db | Quickstart Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/quickstart-python.md | In this quickstart, you create and manage an Azure Cosmos DB for Gremlin (graph) You can also install the Python driver for Gremlin by using the `pip` command line: ```bash- pip install gremlinpython==3.4.13 + pip install gremlinpython==3.7.* ``` - [Git](https://git-scm.com/downloads). -> [!NOTE] -> This quickstart requires a graph database account created after December 20, 2017. Existing accounts will support Python once they're migrated to general availability. --> [!NOTE] -> We currently recommend using gremlinpython==3.4.13 with Gremlin (Graph) API as we haven't fully tested all language-specific libraries of version 3.5.* for use with the service. - ## Create a database account Before you can create a graph database, you need to create a Gremlin (Graph) database account with Azure Cosmos DB. |
cosmos-db | Connect Using Mongoose | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/connect-using-mongoose.md | After you create the database, you'll use the name in the `COSMOSDB_DBNAME` envi 3. Install the necessary packages using one of the ```npm install``` options: - * **Mongoose**: ```npm install mongoose@5.13.15 --save``` + * **Mongoose**: ```npm install mongoose --save``` - > [!IMPORTANT] - > The Mongoose example connection below is based on Mongoose 5+, which has changed since earlier versions. Azure Cosmos DB for MongoDB is compatible with up to version `5.13.15` of Mongoose. For more information, please see the [issue discussion](https://github.com/Automattic/mongoose/issues/11072) in the Mongoose GitHub repository. + > [!NOTE] + > For more information on which version of Mongoose is compatible with your API for MongoDB server version, see [Mongoose compatibility](https://mongoosejs.com/docs/compatibility.html). * **Dotenv** *(if you'd like to load your secrets from an .env file)*: ```npm install dotenv --save``` |
cosmos-db | Vector Search | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/vector-search.md | Use vector search in Azure Cosmos DB for MongoDB vCore to seamlessly integrate y ## What is vector search? -Vector search is a method that helps you find similar items based on their data characteristics rather than by exact matches on a property field. This technique is useful in applications such as searching for similar text, finding related images, making recommendations, or even detecting anomalies. It works by taking the [vector representations](../../../ai-services/openai/concepts/understand-embeddings.md) (lists of numbers) of your data that you created by using a machine learning model by using or an embeddings API. Examples of embeddings APIs are [Azure OpenAI Embeddings](/azure/ai-services/openai/how-to/embeddings) or [Hugging Face on Azure](https://azure.microsoft.com/solutions/hugging-face-on-azure/). It then measures the distance between the data vectors and your query vector. The data vectors that are closest to your query vector are the ones that are found to be most similar semantically. +Vector search is a method that helps you find similar items based on their data characteristics rather than by exact matches on a property field. This technique is useful in applications such as searching for similar text, finding related images, making recommendations, or even detecting anomalies. It works by taking the [vector representations](../../../ai-services/openai/concepts/understand-embeddings.md) (lists of numbers) of your data that you created by using a machine learning model or an embeddings API. Examples of embeddings APIs are [Azure OpenAI Embeddings](/azure/ai-services/openai/how-to/embeddings) or [Hugging Face on Azure](https://azure.microsoft.com/solutions/hugging-face-on-azure/). It then measures the distance between the data vectors and your query vector. 
The data vectors that are closest to your query vector are the ones that are found to be most similar semantically. By integrating vector search capabilities natively, you can unlock the full potential of your data in applications that are built on top of the [OpenAI API](../../../ai-services/openai/concepts/understand-embeddings.md). You can also create custom-built solutions that use vector embeddings. |
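The distance measurement described above is commonly cosine similarity over the embedding vectors: the closer the score is to 1, the more semantically similar two vectors are. As a purely illustrative sketch (plain Python, no Cosmos DB service involved, and tiny made-up vectors instead of real embeddings):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

query = [0.1, 0.9, 0.0]
doc_close = [0.2, 0.8, 0.1]   # points roughly the same direction as the query
doc_far = [0.9, 0.1, 0.0]     # points in a different direction

# The nearer document scores higher, so it would rank first in a search
print(cosine_similarity(query, doc_close) > cosine_similarity(query, doc_far))  # True
```

In the actual service, the index computes this (or another configured metric) over high-dimensional embeddings and returns the top-ranked documents.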
cosmos-db | Monitor Resource Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/monitor-resource-logs.md | Here, we walk through the process of creating diagnostic settings for your accou | **GremlinRequests** | Gremlin | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for Gremlin. When you enable this category, make sure to disable DataPlaneRequests. | `operationName`, `requestCharge`, `piiCommandText`, `retriedDueToRateLimiting` | | **QueryRuntimeStatistics** | NoSQL | This table details query operations executed against an API for NoSQL account. By default, the query text and its parameters are obfuscated to avoid logging personal data, with full-text query logging available by request. | `databasename`, `partitionkeyrangeid`, `querytext` | | **PartitionKeyStatistics** | All APIs | Logs the statistics of logical partition keys by representing the estimated storage size (KB) of the partition keys. This table is useful when troubleshooting storage skews. This PartitionKeyStatistics log is only emitted if the following conditions are true: 1. At least 1% of the documents in the physical partition have the same logical partition key. 2. Out of all the keys in the physical partition, the PartitionKeyStatistics log captures the top three keys with the largest storage size. If the previous conditions aren't met, the partition key statistics data isn't available. It's okay if the above conditions aren't met for your account, which typically indicates you have no logical partition storage skew. **Note**: The estimated size of the partition keys is calculated using a sampling approach that assumes the documents in the physical partition are roughly the same size. If the document sizes aren't uniform in the physical partition, the estimated partition key size may not be accurate. 
| `subscriptionId`, `regionName`, `partitionKey`, `sizeKB` |- | **PartitionKeyRUConsumption** | API for NoSQL | Logs the aggregated per-second RU/s consumption of partition keys. This table is useful for troubleshooting hot partitions. Currently, Azure Cosmos DB reports partition keys for API for NoSQL accounts only and for point read/write and stored procedure operations. | `subscriptionId`, `regionName`, `partitionKey`, `requestCharge`, `partitionKeyRangeId` | + | **PartitionKeyRUConsumption** | API for NoSQL | Logs the aggregated per-second RU/s consumption of partition keys. This table is useful for troubleshooting hot partitions. Currently, Azure Cosmos DB reports partition keys for API for NoSQL accounts only and for point read/write, query, and stored procedure operations. | `subscriptionId`, `regionName`, `partitionKey`, `requestCharge`, `partitionKeyRangeId` | | **ControlPlaneRequests** | All APIs | Logs details on control plane operations, which include creating an account, adding or removing a region, updating account replication settings, etc. | `operationName`, `httpstatusCode`, `httpMethod`, `region` | | **TableApiRequests** | API for Table | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for Table. When you enable this category, make sure to disable DataPlaneRequests. | `operationName`, `requestCharge`, `piiCommandText` | |
cosmos-db | How To Use Stored Procedures Triggers Udfs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-use-stored-procedures-triggers-udfs.md | The following code shows how to call a pre-trigger using the Python SDK: ```python item = {'category': 'Personal', 'name': 'Groceries', 'description': 'Pick up strawberries', 'isComplete': False}-container.create_item(item, {'post_trigger_include': 'trgPreValidateToDoItemTimestamp'}) +container.create_item(item, pre_trigger_include='trgPreValidateToDoItemTimestamp') ``` |
cosmos-db | Performance Tips Query Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/performance-tips-query-sdk.md | The SQL SDK includes a native ServiceInterop.dll to parse and optimize queries l # [V3 .NET SDK](#tab/v3) -For queries that target a Partition Key by setting the [PartitionKey](/dotnet/api/microsoft.azure.cosmos.queryrequestoptions.partitionkey) property in `QueryRequestOptions` and contain no aggregations (including Distinct, DCount, Group By): +For queries that target a Partition Key by setting the [PartitionKey](/dotnet/api/microsoft.azure.cosmos.queryrequestoptions.partitionkey) property in `QueryRequestOptions` and contain no aggregations (including Distinct, DCount, Group By). In this example, the partition key field of `/state` is filtered on the value `Washington`. ++```csharp +using (FeedIterator<MyItem> feedIterator = container.GetItemQueryIterator<MyItem>( + "SELECT * FROM c WHERE c.city = 'Seattle' AND c.state = 'Washington'")) +{ + // ... +} +``` ++Optionally, you can provide the partition key as a part of the request options object. ```cs using (FeedIterator<MyItem> feedIterator = container.GetItemQueryIterator<MyItem>( |
cosmos-db | Quickstart Spark | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-spark.md | The Azure Cosmos DB Spark 3 OLTP Connector for API for NoSQL has a complete conf az cosmosdb sql role assignment create --account-name $accountName --resource-group $resourceGroupName --scope "/" --principal-id $principalId --role-definition-id $readOnlyRoleDefinitionId ``` -1. Now that you have created an Azure Active Directory application and service principle, created a custom role, and assigned that role permissions to your Cosmos DB account, you should be able to run your notebook. +1. Now that you have created an Azure Active Directory application and service principal, created a custom role, and assigned that role permissions to your Cosmos DB account, you should be able to run your notebook. ## Migrate to Spark 3 Connector |
databox-online | Azure Stack Edge Gpu 2309 Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2309-release-notes.md | The release notes are continuously updated, and as critical issues requiring a w This article applies to the **Azure Stack Edge 2309** release, which maps to software version **3.2.2380.1652**. +> [!Warning] +> In this release, you must update the packet core version to AP5GC 2308 before you update to Azure Stack Edge 2309. For detailed steps, see [Azure Private 5G Core 2308 release notes](../private-5g-core/azure-private-5g-core-release-notes-2308.md). +> If you update to Azure Stack Edge 2309 before updating to Packet Core 2308.0.1, you will experience a total system outage. In this case, you must delete and re-create the Azure Kubernetes service cluster on your Azure Stack Edge device. + ## Supported update paths To apply the 2309 update, your device must be running version 2203 or later. |
defender-for-cloud | Monitoring Components | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/monitoring-components.md | The following use cases explain how deployment of the Log Analytics agent works - **A pre-existing VM extension is present**: - When the Monitoring Agent is installed as an extension, the extension configuration allows reporting to only a single workspace. Defender for Cloud doesn't override existing connections to user workspaces. Defender for Cloud will store security data from the VM in the workspace already connected, if the "Security" or "SecurityCenterFree" solution has been installed on it. Defender for Cloud may upgrade the extension version to the latest version in this process.- - To see to which workspace the existing extension is sending data to, run the *TestCloudConnection.exe* tool to validate connectivity with Microsoft Defender for Cloud, as described in [Verify Log Analytics Agent connectivity](/services-hub/health/assessments-troubleshooting#verify-log-analytics-agent-connectivity). Alternatively, you can open Log Analytics workspaces, select a workspace, select the VM, and look at the Log Analytics agent connection. + - To see which workspace the existing extension is sending data to, run the *TestCloudConnection.exe* tool to validate connectivity with Microsoft Defender for Cloud, as described in [Verify Log Analytics Agent connectivity](/services-hub/unified/health/assessments-troubleshooting#verify-log-analytics-agent-connectivity). Alternatively, you can open Log Analytics workspaces, select a workspace, select the VM, and look at the Log Analytics agent connection. - If you have an environment where the Log Analytics agent is installed on client workstations and reporting to an existing Log Analytics workspace, review the list of [operating systems supported by Microsoft Defender for Cloud](security-center-os-coverage.md) to make sure your operating system is supported. 
Learn more about [working with the Log Analytics agent](working-with-log-analytics-agent.md). |
defender-for-iot | Plan Corporate Monitoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/best-practices/plan-corporate-monitoring.md | Use the content below to learn how to plan your overall OT monitoring with Micro ## Prerequisites -Before you start planning your OT monitoring deployment, make sure that you have an Azure subscription and an OT plan onboarded Defender for IoT. For more information, see [Add an OT plan to your Azure subscription](../getting-started.md). +Before you start planning your OT monitoring deployment, make sure that you have an Azure subscription and an OT plan onboarded to Defender for IoT. For more information, see [Start a Microsoft Defender for IoT trial](../getting-started.md). This step is performed by your architecture teams. |
defender-for-iot | Configure Sensor Settings Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/configure-sensor-settings-portal.md | Selected OT sensor settings, listed below, are also available directly from the To define OT sensor settings, make sure that you have the following: -- **An Azure subscription onboarded to Defender for IoT**. If you need to, [sign up for a free account](https://azure.microsoft.com/free/), and then use the [Quickstart: Get started with Defender for IoT](getting-started.md) to onboard.+- **An Azure subscription onboarded to Defender for IoT**. If you need to, [sign up for a free account](https://azure.microsoft.com/free/), and then use the [Quickstart: Get started with Defender for IoT](getting-started.md) to start a free trial. - **Permissions**: |
defender-for-iot | Getting Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/getting-started.md | Title: Get started with OT network security monitoring - Microsoft Defender for IoT + Title: Get started with OT monitoring - Microsoft Defender for IoT description: Use this quickstart to set up a trial OT plan with Microsoft Defender for IoT and understand the next steps required to configure your network sensors. Previously updated : 06/04/2023 Last updated : 09/21/2023+#CustomerIntent: As a prospective Defender for IoT customer with OT networks, I want to understand how I can set up a trial and evaluate Defender for IoT. # Start a Microsoft Defender for IoT trial -This article describes how to set up a trial license and create an initial OT plan for Microsoft Defender for IoT. Use Defender for IoT to monitor network traffic across your OT networks. +This article describes how to set up a trial license and create an initial OT plan for Microsoft Defender for IoT, for customers who don't yet have a Microsoft tenant or Azure subscription. Use Defender for IoT to monitor network traffic across your OT networks. -A trial license supports a **Large** site size for 60 days. You might want to use this trial with a [virtual sensor](tutorial-onboarding.md) or on-premises sensors to monitor traffic, analyze data, generate alerts, understand network risks and vulnerabilities, and more. +A trial supports a **Large** site size with up to 1000 devices, and lasts for 60 days. You might want to use this trial with a [virtual sensor](tutorial-onboarding.md) or on-premises sensors to monitor traffic, analyze data, generate alerts, understand network risks and vulnerabilities, and more. ## Prerequisites -Before you start, make sure that you have: +Before you start, all you need is an email address that will be used as the contact for your new Microsoft tenant. 
-- A Microsoft 365 tenant, with access to the [Microsoft 365 admin center](https://portal.office.com/AdminPortal/Home#/catalog) as Global or Billing admin.-- For more information, see [Buy or remove Microsoft 365 licenses for a subscription](/microsoft-365/commerce/licenses/buy-licenses) and [About admin roles in the Microsoft 365 admin center](/microsoft-365/admin/add-users/about-admin-roles). --- An Azure account. If you don't already have an Azure account, you can [create your free Azure account today](https://azure.microsoft.com/free/).--- Access to the Azure portal as a [Security Admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner). For more information, see [Azure user roles for OT and Enterprise IoT monitoring with Defender for IoT](roles-azure.md).+You'll also need to enter credit card details for your new Azure subscription, although you won't be charged until you switch from the **Free Trial** to the **Pay-As-You-Go** plan. ## Add a trial license This procedure describes how to add a trial license for Defender for IoT to your **To add a trial license**: -1. Go to the [Microsoft 365 admin center](https://portal.office.com/AdminPortal/Home#/catalog) **Billing > Purchase services**. If you don't have this option, select **Marketplace** instead. +1. In a browser, open the [Microsoft Defender for IoT - OT Site License (1000 max devices per site) Trial wizard](https://signup.microsoft.com/get-started/signup?OfferId=11c457e2-ac0a-430d-8500-88c99927ff9f&ali=1&products=11c457e2-ac0a-430d-8500-88c99927ff9f). ++1. In the **Email** box, enter the email address you want to associate with the trial license, and select **Next**. -1. Search for **Microsoft Defender for IoT** and locate the **Microsoft Defender for IoT - OT Site License (1000 max devices per site) Trial** item. +1. 
In the **Tell us about yourself** page, enter your details, and then select **Next**. -1. Select **Details** > **Start free trial** > **Try now** to start the trial. +1. Select whether you want the confirmation message to be sent to you via SMS or a phone call. Verify your phone number, and then select **Send verification code**. -For more information, see the [Microsoft 365 admin center help](/microsoft-365/admin/). +1. After receiving the code, enter it in the **Enter your verification code** box. ++1. In the **How you'll sign in** page, enter a username and password and select **Next**. ++1. In the **Confirmation details** page, note your order number and username, and then select the **Start using Microsoft Defender for IoT - OT Site License (1000 max devices per site) Trial** button to continue. We recommend that you copy your full username to the clipboard as you'll need it to access the Azure portal. ++Use the Microsoft 365 admin center to manage your users, billing details, and more. For more information, see the [Microsoft 365 admin center help](/microsoft-365/admin/). ## Add an OT plan -This procedure describes how to add an OT plan for Defender for IoT in the Azure portal, based on the trial license you'd obtained from the [Microsoft 365 admin center](#add-a-trial-license). +This procedure describes how to add an OT plan for Defender for IoT in the Azure portal, based on your [new trial license](#add-a-trial-license). **To add an OT plan in Defender for IoT**: -1. In [Defender for IoT](https://portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started), select **Plans and pricing** > **Add plan**. --1. In the **Plan settings** pane, select the Azure subscription where you want to add a plan. +1. Open [Defender for IoT](https://portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) in the Azure portal and select **Plans and pricing**, where you're prompted to create a new subscription. 
- You can only add a single subscription, and you'll need a [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) role for the selected subscription. +1. Select **Go to subscriptions** to create a new subscription on the [Azure **Subscriptions** page](https://portal.azure.com/?quickstart=True#view/Microsoft_Azure_Billing/SubscriptionsBlade). Make sure to select the **Free Trial** option. - > [!TIP] - > If your subscription isn't listed, check your account details and confirm your permissions with the subscription owner. Also make sure that you have the right subscriptions selected in your Azure settings > **Directories + subscriptions** page. +1. Back in the Defender for IoT's **Plans and pricing** page, select **Add plan**. In the **Plan settings** pane, select your new subscription. The **Price plan** value is updated automatically to read **Microsoft 365**, reflecting your Microsoft 365 license. -1. Select **Next** and review the details for your licensed site. The details listed on the **Review and purchase** pane reflect any licenses you've obtained from the Microsoft 365 admin center. +1. Select **Next** and review the details for your licensed site. The details listed on the **Review and purchase** pane reflect your trial license. 1. Select the terms and conditions, and then select **Save**. |
defender-for-iot | Ot Deploy Path | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-deploy/ot-deploy-path.md | While teams and job titles differ across different organizations, all Defender f Before you start planning your OT monitoring deployment, make sure that you have an Azure subscription and an OT plan onboarded to Defender for IoT. -For more information, see [Add an OT plan to your Azure subscription](../getting-started.md). +For more information, see [Start a Microsoft Defender for IoT trial](../getting-started.md). ## Planning and preparing |
digital-twins | How To Use 3D Scenes Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-use-3d-scenes-studio.md | Azure Digital Twins [3D Scenes Studio (preview)](https://explorer.digitaltwins.a ## Prerequisites -To use 3D Scenes Studio, you'll need the following resources: +To use 3D Scenes Studio, you'll need the following resources. + * An Azure Digital Twins instance. For instructions, see [Set up an instance and authentication](how-to-set-up-instance-cli.md). * Obtain *Azure Digital Twins Data Owner* or *Azure Digital Twins Data Reader* access to the instance. For instructions, see [Set up user access permissions](how-to-set-up-instance-cli.md#set-up-user-access-permissions). * Take note of the *host name* of your instance to use later. To use 3D Scenes Studio, you'll need the following resources: * Take note of the *URL* of your storage account to use later. * A private container in the storage account. For instructions, see [Create a container](../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container). * Take note of the *name* of your storage container to use later.-* *Storage Blob Data Owner* or *Storage Blob Data Contributor* and also at least *Reader* roles are needed to access your storage resources. You can grant required roles at either the storage account level or the container level. For instructions and more information about permissions to Azure storage, see [Assign an Azure role](../storage/blobs/assign-azure-role-data-access.md?tabs=portal#assign-an-azure-role). +* Permissions for your storage resources, including: + * At least *Reader* control plane access + * A data access role of *Storage Blob Data Owner* or *Storage Blob Data Contributor* ++ You can grant required roles at either the storage account level or the container level. 
For instructions and more information about permissions to Azure storage, see [Assign an Azure role](../storage/blobs/assign-azure-role-data-access.md?tabs=portal#assign-an-azure-role). * Configure CORS for your storage account (see details in the following sub-section). ### Configure CORS |
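The CORS requirement called out for 3D Scenes Studio amounts to one rule on the storage account's Blob service allowing the studio's origin. A minimal sketch of what such a rule contains and how a browser preflight is evaluated against it; the studio origin comes from the article, but the method and header lists here are assumptions to verify against the article's CORS section:

```python
# Illustrative CORS rule for the storage account behind 3D Scenes Studio.
# Origin is the studio URL; methods/headers are assumed example values.
cors_rule = {
    "allowedOrigins": ["https://explorer.digitaltwins.azure.net"],
    "allowedMethods": ["GET", "OPTIONS", "POST", "PUT"],
    "allowedHeaders": ["Authorization", "x-ms-version", "x-ms-blob-type"],
    "exposedHeaders": [],
    "maxAgeInSeconds": 3600,
}

def allows(rule: dict, origin: str, method: str) -> bool:
    """Check whether a browser preflight from `origin` with `method` would pass."""
    return origin in rule["allowedOrigins"] and method in rule["allowedMethods"]

print(allows(cors_rule, "https://explorer.digitaltwins.azure.net", "GET"))
print(allows(cors_rule, "https://evil.example", "GET"))
```

A request from any origin not listed in the rule is rejected by the browser, which is why the studio can't read your blobs until the rule is in place.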
dns | Dns Private Resolver Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-overview.md | The following restrictions hold with respect to virtual networks: ### Subnet restrictions Subnets used for DNS resolver have the following limitations:-- A subnet must be a minimum of /28 address space or a maximum of /24 address space.+- A subnet must be a minimum of /28 address space or a maximum of /24 address space. A /28 subnet is sufficient to accommodate current endpoint limits. A subnet size of /27 to /24 can provide flexibility if these limits change. - A subnet can't be shared between multiple DNS resolver endpoints. A single subnet can only be used by a single DNS resolver endpoint. - All IP configurations for a DNS resolver inbound endpoint must reference the same subnet. Spanning multiple subnets in the IP configuration for a single DNS resolver inbound endpoint isn't allowed. - The subnet used for a DNS resolver inbound endpoint must be within the virtual network referenced by the parent DNS resolver. |
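The /28-to-/24 bounds above translate into a small usable-address budget per endpoint subnet, because Azure reserves five IP addresses in every subnet. A quick sketch of the arithmetic:

```python
import ipaddress

# Azure reserves 5 addresses per subnet: network, broadcast,
# the default gateway, and two for Azure DNS.
AZURE_RESERVED_IPS = 5

def usable_ips(cidr: str) -> int:
    net = ipaddress.ip_network(cidr)
    if not 24 <= net.prefixlen <= 28:
        raise ValueError("DNS resolver subnets must be between /24 and /28")
    return net.num_addresses - AZURE_RESERVED_IPS

print(usable_ips("10.0.0.0/28"))  # 11 usable addresses
print(usable_ips("10.0.0.0/24"))  # 251 usable addresses
```

Eleven usable addresses in a /28 is enough for today's endpoint limits; sizing up toward /24 only buys headroom if those limits grow.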
event-grid | Custom Event Quickstart Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/custom-event-quickstart-portal.md | Title: 'Send custom events to web endpoint - Event Grid, Azure portal' description: 'Quickstart: Use Azure Event Grid and Azure portal to publish a custom topic, and subscribe to events for that topic. The events are handled by a web application.' Previously updated : 07/21/2022 Last updated : 09/25/2023 Before you create a subscription for the custom topic, create an endpoint for th 1. On the **Review + create** page, select **Create**. 1. The deployment may take a few minutes to complete. Select Alerts (bell icon) in the portal, and then select **Go to resource group**. - ![Alert - navigate to resource group.](./media/blob-event-quickstart-portal/navigate-resource-group.png) + :::image type="content" source="./media/blob-event-quickstart-portal/navigate-resource-group.png" alt-text="Screenshot showing the successful deployment message with a link to navigate to the resource group."::: 4. On the **Resource group** page, in the list of resources, select the web app that you created. You also see the App Service plan and the storage account in this list. - ![Select web site.](./media/blob-event-quickstart-portal/resource-group-resources.png) + :::image type="content" source="./media/blob-event-quickstart-portal/resource-group-resources.png" alt-text="Screenshot that shows the Resource Group page with the deployed resources."::: 5. On the **App Service** page for your web app, select the URL to navigate to the web site. The URL should be in this format: `https://<your-site-name>.azurewebsites.net`. - ![Navigate to web site.](./media/blob-event-quickstart-portal/web-site.png) -+ :::image type="content" source="./media/blob-event-quickstart-portal/web-site.png" alt-text="Screenshot that shows the App Service page with the link to the site highlighted."::: 6. 
Confirm that you see the site but no events have been posted to it yet. - ![View new site.](./media/blob-event-quickstart-portal/view-site.png) + :::image type="content" source="./media/blob-event-quickstart-portal/view-site.png" alt-text="Screenshot that shows the Event Grid Viewer sample app."::: ## Subscribe to custom topic You subscribe to an Event Grid topic to tell Event Grid which events you want to 3. View your web app again, and notice that a subscription validation event has been sent to it. Select the eye icon to expand the event data. Event Grid sends the validation event so the endpoint can verify that it wants to receive event data. The web app includes code to validate the subscription. - ![View subscription event](./media/custom-event-quickstart-portal/view-subscription-event.png) + :::image type="content" source="./media/custom-event-quickstart-portal/view-subscription-event.png" alt-text="Screenshot of the Event Grid Viewer app with the Subscription Validated event."::: ## Send an event to your topic The first example uses Azure CLI. It gets the URL and key for the custom topic, :::image type="content" source="./media/custom-event-quickstart-portal/select-cloud-shell.png" alt-text="Select Cloud Shell icon"::: 1. Select **Bash** in the top-left corner of the Cloud Shell window. - ![Cloud Shell - Bash](./media/custom-event-quickstart-portal/cloud-shell-bash.png) + :::image type="content" source="./media/custom-event-quickstart-portal/cloud-shell-bash.png" alt-text="Screenshot that shows the Cloud Shell with Bash selected in the top-left corner."::: 1. Run the following command to get the **endpoint** for the topic: After you copy and paste the command, update the **topic name** and **resource group name** before you run the command. You publish sample events to this topic endpoint. ```azurecli If you plan to continue working with this event, don't clean up the resources cr 1. Select **Resource Groups** on the left menu. 
If you don't see it on the left menu, select **All Services** on the left menu, and select **Resource Groups**. - ![Resource groups](./media/custom-event-quickstart-portal/delete-resource-groups.png) + :::image type="content" source="./media/custom-event-quickstart-portal/delete-resource-groups.png" alt-text="Screenshot that shows the Resource Groups page." ::: 1. Select the resource group to launch the **Resource Group** page. 1. Select **Delete resource group** on the toolbar. 1. Confirm deletion by entering the name of the resource group, and select **Delete**. |
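The quickstart's CLI example posts a JSON array of events to the custom topic endpoint, authenticated with the topic's access key in the `aeg-sas-key` header. A hedged Python sketch of the same request shape; the endpoint and key are placeholders, and the payload fields follow the Event Grid event schema:

```python
import json
import uuid
from datetime import datetime, timezone

def make_event(subject: str, event_type: str, data: dict) -> dict:
    # Fields required by the Event Grid event schema.
    return {
        "id": str(uuid.uuid4()),
        "eventType": event_type,
        "subject": subject,
        "eventTime": datetime.now(timezone.utc).isoformat(),
        "data": data,
        "dataVersion": "1.0",
    }

# Event Grid expects a JSON *array* of events in the request body.
events = [make_event("door1", "recordInserted", {"make": "Contoso", "model": "Monitor"})]
headers = {
    "aeg-sas-key": "<topic-access-key>",  # placeholder: one of the topic's keys
    "Content-Type": "application/json",
}
body = json.dumps(events)
# POST body with these headers to the topic endpoint,
# e.g. https://<topic-name>.<region>-1.eventgrid.azure.net/api/events
```

The Event Grid Viewer app deployed earlier then displays each event it receives.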
event-grid | Enable Diagnostic Logs Topic | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/enable-diagnostic-logs-topic.md | Title: Azure Event Grid - Enable diagnostic logs for Event Grid resources description: This article provides step-by-step instructions on how to enable diagnostic logs for Event Grid resources. Previously updated : 11/11/2021 Last updated : 09/25/2023 # Enable diagnostic logs for Event Grid resources This article provides step-by-step instructions for enabling diagnostic settings 1. Sign in to the [Azure portal](https://portal.azure.com). 2. Navigate to the Event Grid topic for which you want to enable diagnostic log settings. 1. In the search bar at the top, search for **Event Grid topics**.- ![Search for custom topics](./media/enable-diagnostic-logs-topic/search-custom-topics.png) - 2. Select the **topic** from the list for which you want to configure diagnostic settings. -3. Select **Diagnostic settings** under **Monitoring** in the left menu. -4. On the **Diagnostic settings** page, select **Add New Diagnostic Setting**. - ![Add diagnostic setting button](./media/enable-diagnostic-logs-topic/diagnostic-settings-add.png) -5. Specify a **name** for the diagnostic setting. -6. Select the **allLogs** option in the **Logs** section. - ![Select the failures](./media/enable-diagnostic-logs-topic/log-failures.png) -7. Enable one or more of the capture destinations for the logs, and then configure them by selecting a previous created capture resource. + + :::image type="content" source="./media/enable-diagnostic-logs-topic/search-custom-topics.png" alt-text="Screenshot that shows the Azure portal with Event Grid topics in the search box."::: + 1. Select the **topic** from the list for which you want to configure diagnostic settings. +1. Select **Diagnostic settings** under **Monitoring** in the left menu. +1. On the **Diagnostic settings** page, select **Add New Diagnostic Setting**. 
++ :::image type="content" source="./media/enable-diagnostic-logs-topic/diagnostic-settings-add.png" alt-text="Screenshot showing the Diagnostic settings page of a custom topic."::: +1. Specify a **name** for the diagnostic setting. +1. Select the **allLogs** option in the **Logs** section. ++ :::image type="content" source="./media/enable-diagnostic-logs-topic/log-failures.png" alt-text="Screenshot that shows the Diagnostic setting page with All logs selected."::: +1. Enable one or more of the capture destinations for the logs, and then configure them by selecting a previously created capture resource. + - If you select **Send to Log Analytics**, select the Log Analytics workspace. + + :::image type="content" source="./media/enable-diagnostic-logs-topic/send-log-analytics.png" alt-text="Screenshot that shows the Diagnostic settings page with Send to Log Analytics selected."::: - If you select **Archive to a storage account**, select **Storage account - Configure**, and then select the storage account in your Azure subscription.- + + :::image type="content" source="./media/enable-diagnostic-logs-topic/archive-storage.png" alt-text="Screenshot that shows the Diagnostic settings page with Archive to an Azure storage account checked and a storage account selected."::: - If you select **Stream to an event hub**, select **Event hub - Configure**, and then select the Event Hubs namespace, event hub, and the access policy.+ ![Screenshot that shows the "Diagnostic settings" page with "Stream to an event hub" checked.](./media/enable-diagnostic-logs-topic/archive-event-hub.png)- - If you select **Send to Log Analytics**, select the Log Analytics workspace. 
- ![Screenshot that shows the "Diagnostic settings" page with "Send to Log Analytics" checked.](./media/enable-diagnostic-logs-topic/send-log-analytics.png) -8. Select **Save**. Then, select **X** in the right-corner to close the page. -9. Now, back on the **Diagnostic settings** page, confirm that you see a new entry in the **Diagnostics Settings** table. +1. Select **Save**. Then, select **X** in the top-right corner to close the page. +1. Now, back on the **Diagnostic settings** page, confirm that you see a new entry in the **Diagnostics Settings** table. + ![Screenshot that shows the "Diagnostic settings" page with a new entry highlighted in the "Diagnostics settings" table.](./media/enable-diagnostic-logs-topic/diagnostic-setting-list.png) You can also enable collection of all metrics for the topic. Then, it creates a diagnostic setting on the topic to send diagnostic informatio Event Grid can publish audit traces for data plane operations. To enable the feature, select **audit** in the **Category groups** section or select **DataPlaneRequests** in the **Categories** section. The audit trace can be used to ensure that data access is allowed only for authorized purposes. It collects information about security controls such as resource name, operation type, network access, level, region and more. For more information about how to enable the diagnostic setting, see [Diagnostic logs in Event Grid topics and Event domains](enable-diagnostic-logs-topic.md#enable-diagnostic-logs-for-event-grid-topics-and-domains).-![Select the audit traces](./media/enable-diagnostic-logs-topic/enable-audit-logs.png) +![Screenshot that shows the Diagnostic settings page with Audit selected.](./media/enable-diagnostic-logs-topic/enable-audit-logs.png) > [!IMPORTANT] > For more information about the `DataPlaneRequests` schema, see [Diagnostic logs](monitor-push-reference.md). |
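The portal steps above ultimately create a diagnostic setting resource on the topic. As an illustration only, here is a plausible request body for the Azure Monitor diagnostic settings API, using the **allLogs** category group and the **DataPlaneRequests** audit category named in the article; treat the exact property names as assumptions to confirm against the API reference:

```python
import json

# Illustrative diagnostic-setting body sending the allLogs category group
# to a Log Analytics workspace. Property names are modeled on the Azure
# Monitor diagnostic settings REST API and are not authoritative here.
diagnostic_setting = {
    "properties": {
        "workspaceId": (
            "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/"
            "Microsoft.OperationalInsights/workspaces/<workspace>"
        ),
        # To capture only audit traces instead, replace the entry below with
        # {"category": "DataPlaneRequests", "enabled": True}.
        "logs": [{"categoryGroup": "allLogs", "enabled": True}],
        "metrics": [{"category": "AllMetrics", "enabled": True}],
    }
}

body = json.dumps(diagnostic_setting, indent=2)
```

The portal assembles an equivalent payload for you when you select the destinations and categories and select **Save**.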
event-grid | How To Filter Events | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/how-to-filter-events.md | Title: How to filter events for Azure Event Grid description: This article shows how to filter events (by event type, by subject, by operators and data, etc.) when creating an Event Grid subscription. Previously updated : 08/11/2021 Last updated : 09/25/2023 # Filter events for Event Grid This article shows how to filter events when creating an Event Grid subscription ## Filter by event type -When creating an Event Grid subscription, you can specify which [event types](event-schema.md) to send to the endpoint. The examples in this section create event subscriptions for a resource group but limit the events that are sent to `Microsoft.Resources.ResourceWriteFailure` and `Microsoft.Resources.ResourceWriteSuccess`. If you need more flexibility when filtering events by event types, see Filter by advanced operators and data fields. +When creating an Event Grid subscription, you can specify which [event types](event-schema.md) to send to the endpoint. The examples in this section create event subscriptions for a resource group but limit the events that are sent to `Microsoft.Resources.ResourceWriteFailure` and `Microsoft.Resources.ResourceWriteSuccess`. If you need more flexibility when filtering events by event types, see [Filter by operators and data](#filter-by-operators-and-data). ### Azure PowerShell For PowerShell, use the `-IncludedEventType` parameter when creating the subscription. az eventgrid event-subscription create \ ### Azure portal -1. On the **Event Subscription** page, switch to the **Filters** tab. -1. Select **Add Event Type** next to **Filter to Event Types**. +While creating an event subscription to a **system topic**, use the drop-down list to select the event types as shown in the following image. 
- :::image type="content" source="./media/how-to-filter-events/add-event-type-button.png" alt-text="Screenshot of the Event Subscription page with Add Event Type button selected."::: -1. Type the event type and press ENTER. In the following example, the event type is `Microsoft.Resources.ResourceWriteSuccess`. - :::image type="content" source="./media/how-to-filter-events/sample-event-type.png" alt-text="Screenshot of the Event Subscription page with a sample event type."::: +For an existing subscription to a system topic, use the **Filters** tab of the **Event Subscription** page as shown in the following image. +++You can specify filters while creating an event subscription to a **custom topic** by selecting the **Add Event Type** link as shown in the following image. +++To specify a filter for an existing subscription to a custom topic, use the **Filters** tab in the **Event Subscription** page. + ### Azure Resource Manager template For a Resource Manager template, use the `includedEventTypes` property. ## Filter by subject -You can filter events by the subject in the event data. You can specify a value to match for the beginning or end of the subject. If you need more flexibility when filtering events by subject, see Filter by advanced operators and data fields. +You can filter events by the subject in the event data. You can specify a value to match for the beginning or end of the subject. If you need more flexibility when filtering events by subject, see [Filter by operators and data](#filter-by-operators-and-data). In the following PowerShell example, you create an event subscription that filters by the beginning of the subject. You use the `-SubjectBeginsWith` parameter to limit events to ones for a specific resource. You pass the resource ID of a network security group. 
On the **Event Subscription** page, select **Enable subject filtering**. 1. Enter values for one or more of the following fields: **Subject begins with** and **Subject ends with**. In the following example, both options are selected. :::image type="content" source="./media/how-to-filter-events/subject-filter-example.png" alt-text="Screenshot of Event Subscription page with subject filtering example."::: 1. Select the **Case-sensitive subject matching** option if you want the subject of the event to match the case of the filters specified. +When creating an event subscription, use the **Filters** tab on the creation wizard. +++ ### Azure Resource Manager template In the following Resource Manager template example, you create an event subscription that filters by the beginning of the subject. You use the `subjectBeginsWith` property to limit events to ones for a specific resource. You pass the resource ID of a network security group. |
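The event-type and subject filters described in this article combine as a simple match: the event type must be one of the included types, and the subject must start and end with the configured prefixes and suffixes. A small sketch of those semantics (subject matching is modeled as case-insensitive by default, matching the portal's opt-in **Case-sensitive subject matching** toggle):

```python
def matches(event, included_event_types=None, subject_begins_with="",
            subject_ends_with="", case_sensitive=False):
    # Event type filter: if any types are listed, the event's type must be one of them.
    if included_event_types and event["eventType"] not in included_event_types:
        return False
    # Subject filter: prefix and suffix match, case-insensitive unless opted in.
    subject = event["subject"] if case_sensitive else event["subject"].lower()
    begins = subject_begins_with if case_sensitive else subject_begins_with.lower()
    ends = subject_ends_with if case_sensitive else subject_ends_with.lower()
    return subject.startswith(begins) and subject.endswith(ends)

event = {
    "eventType": "Microsoft.Resources.ResourceWriteSuccess",
    "subject": "/subscriptions/xxx/resourceGroups/rg/providers/"
               "Microsoft.Network/networkSecurityGroups/nsg1",
}
print(matches(event,
              included_event_types={"Microsoft.Resources.ResourceWriteSuccess"},
              subject_begins_with="/subscriptions"))  # True
```

This is only a mental model of the filter semantics, not Event Grid's implementation; advanced operators on data fields add further options beyond prefix and suffix matching.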
event-grid | Webhook Event Delivery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/webhook-event-delivery.md | Title: WebHook event delivery description: This article describes WebHook event delivery and endpoint validation when using webhooks. Previously updated : 07/06/2022 Last updated : 09/25/2023 # Webhook event delivery -Webhooks are one of the many ways to receive events from Azure Event Grid. When a new event is ready, Event Grid service POSTs an HTTP request to the configured endpoint with the event in the request body. +Webhooks are one of the many ways to receive events from Azure Event Grid. When a new event is ready, the Event Grid service POSTs an HTTP request to the configured endpoint with the event information in the request body. Like many other services that support webhooks, Event Grid requires you to prove ownership of your webhook endpoint before it starts delivering events to that endpoint. This requirement prevents a malicious user from flooding your endpoint with events. ## Endpoint validation with Event Grid events-When you use any of the three Azure services listed below, the Azure infrastructure automatically handles this validation: +When you use any of the following three Azure services, the Azure infrastructure automatically handles this validation: - Azure Logic Apps with [Event Grid Connector](/connectors/azureeventgrid/) - Azure Automation via [webhook](../event-grid/ensure-tags-exists-on-new-virtual-machines.md) If you're using any other type of endpoint, such as an HTTP trigger based Azure Event Grid supports a manual validation handshake. If you're creating an event subscription with an SDK or tool that uses API version 2018-05-01-preview or later, Event Grid sends a `validationUrl` property in the data portion of the subscription validation event. To complete the handshake, find that URL in the event data and do a GET request to it. You can use either a REST client or your web browser. 
- The provided URL is valid for **5 minutes**. During that time, the provisioning state of the event subscription is `AwaitingManualAction`. If you don't complete the manual validation within 5 minutes, the provisioning state is set to `Failed`. You'll have to create the event subscription again before starting the manual validation. + The provided URL is valid for **5 minutes**. During that time, the provisioning state of the event subscription is `AwaitingManualAction`. If you don't complete the manual validation within 5 minutes, the provisioning state is set to `Failed`. You have to create the event subscription again before starting the manual validation. This authentication mechanism also requires the webhook endpoint to return an HTTP status code of 200 so that Event Grid knows that the POST for the validation event was accepted before it can be put in the manual validation mode. In other words, if the endpoint returns 200 but doesn't return back a validation response synchronously, the mode is transitioned to the manual validation mode. If there's a GET on the validation URL within 5 minutes, the validation handshake is considered to be successful. To prove endpoint ownership, echo back the validation code in the `validationResponse` property. And, follow one of these steps: -- You must return an **HTTP 200 OK** response status code. **HTTP 202 Accepted** isn't recognized as a valid Event Grid subscription validation response. The HTTP request must complete within 30 seconds. 
If the operation doesn't finish within 30 seconds, then the operation will be canceled and it may be reattempted after 5 seconds. If all the attempts fail, then it's treated as a validation handshake error. - The fact that your application is prepared to handle and return the validation code indicates that you created the event subscription and expected to receive the event. Imagine the scenario that there is no handshake validation supported and a hacker gets to know your application URL. The hacker can create a topic and an event subscription with your application's URL, and start conducting a DoS attack to your application by sending a lot of events. The handshake validation prevents that to happen. + The fact that your application is prepared to handle and return the validation code indicates that you created the event subscription and expected to receive the event. Imagine a scenario where handshake validation isn't supported and a hacker learns your application URL. The hacker can create a topic and an event subscription with your application's URL, and start conducting a DoS attack against your application by sending a lot of events. The handshake validation prevents that from happening. - Imagine that you already have the validation implemented in your app because you created your own event subscriptions. Even if a hacker creates an event subscription with your app URL, your correct implementation of the validation request event will check for the `aeg-subscription-name` header in the request to ascertain that it's an event subscription that you recognize. + Imagine that you already have the validation implemented in your app because you created your own event subscriptions. Even if a hacker creates an event subscription with your app URL, your correct implementation of the validation request event checks for the `aeg-subscription-name` header in the request to ascertain that it's an event subscription that you recognize. 
Even after that correct handshake implementation, a hacker can flood your app (it already validated the event subscription) by replicating a request that seems to be coming from Event Grid. To prevent that, you must secure your webhook with AAD authentication. For more information, see [Deliver events to Azure Active Directory protected endpoints](secure-webhook-delivery.md). -- Or, you can manually validate the subscription by sending a GET request to the validation URL. The event subscription stays in a pending state until validated. The validation Url uses **port 553**. If your firewall rules block port 553, you'll need update rules for a successful manual handshake.+- Or, you can manually validate the subscription by sending a GET request to the validation URL. The event subscription stays in a pending state until validated. The validation URL uses **port 553**. If your firewall rules block port 553, you need to update your rules for a successful manual handshake. - In your validation of the subscription validation event, if you identify that it isn't an event subscription for which you are expecting events, you wouldn't return a 200 response or no response at all. Hence, the validation will fail. + In your validation of the subscription validation event, if you identify that it isn't an event subscription for which you're expecting events, return a non-200 response or no response at all, so that the validation fails. For an example of handling the subscription validation handshake, see a [C# sample](https://github.com/Azure-Samples/event-grid-dotnet-publish-consume-events/blob/master/EventGridConsumer/EventGridConsumer/Function1.cs). When a topic is created, an incoming event schema is defined.
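The echo-back handshake and the `aeg-subscription-name` check described above can be sketched as a small handler. This is an illustrative Python sketch (the function and parameter names are hypothetical; the `validationCode`/`validationResponse` field names follow the documented subscription validation event schema), not the linked C# sample:

```python
import json

VALIDATION_EVENT = "Microsoft.EventGrid.SubscriptionValidationEvent"

def handle_event_grid_post(body, headers, known_subscriptions):
    """Return (status_code, response_body) for an Event Grid POST.

    Echoes the validation code back synchronously with HTTP 200, but only
    for subscriptions we recognize via the aeg-subscription-name header;
    unknown subscriptions get a non-200 response so validation fails.
    """
    events = json.loads(body)
    for event in events:
        if event.get("eventType") == VALIDATION_EVENT:
            if headers.get("aeg-subscription-name") not in known_subscriptions:
                return 403, ""  # not a subscription we created: refuse
            code = event["data"]["validationCode"]
            return 200, json.dumps({"validationResponse": code})
    return 200, ""  # ordinary event delivery: acknowledge
```

If the handler cannot answer synchronously, returning 200 without the `validationResponse` body moves the subscription into manual validation mode, as described above.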
And, when a subscr | | Custom input schema | Yes | ## Next steps-See the following article to learn how to troubleshoot event subscription validations: --[Troubleshoot event subscription validations](troubleshoot-subscription-validation.md) +See the following article to learn how to troubleshoot event subscription validations: [Troubleshoot event subscription validations](troubleshoot-subscription-validation.md). |
expressroute | About Fastpath | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/about-fastpath.md | FastPath Private endpoint/Private Link connectivity is supported for the followi - Azure Storage - Third Party Private Link Services -Connections associated to ExpressRoute partner circuits aren't eligible for this preview. Both IPv4 and IPv6 connectivity is supported. - > [!NOTE]-> Private Link pricing will not apply to traffic sent over ExpressRoute FastPath. For more information about pricing, check out the [Private Link pricing page](https://azure.microsoft.com/pricing/details/private-link/). -> +> * Enabling FastPath Private endpoint/Link support for limited GA scenarios may take upwards of 2 weeks to complete. Please plan your deployment(s) in advance. +> * Connections associated to ExpressRoute partner circuits aren't eligible for this preview. Both IPv4 and IPv6 connectivity is supported. +> * Private Link pricing will not apply to traffic sent over ExpressRoute FastPath. For more information about pricing, check out the [Private Link pricing page](https://azure.microsoft.com/pricing/details/private-link/). +> * FastPath supports a max of 100 Gbps connectivity to a single Availability Zone (AZ). For more information about supported scenarios and to enroll in the limited GA offering, send an email to **exrpm@microsoft.com** with the following information: - Azure subscription ID |
expressroute | Expressroute Howto Linkvnet Arm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-linkvnet-arm.md | Set-AzVirtualNetworkGatewayConnection -VirtualNetworkGatewayConnection $connecti > > [!NOTE]-> FastPath and Private Link feature onboarding requires time to be enabled after request. You can expect about two weeks of delay until request is completed, so we encourage you to plan your deployment in advance with these timelines into consideration. +> Enabling FastPath Private Link support for limited GA scenarios may take upwards of 2 weeks to complete. Please plan your deployment(s) in advance. > ## Enroll in ExpressRoute FastPath features (preview) |
expressroute | How To Configure Traffic Collector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/how-to-configure-traffic-collector.md | Title: Configure Traffic Collector for ExpressRoute Direct + Title: Configure Traffic Collector for ExpressRoute Direct (Preview) description: This article shows you how to create an ExpressRoute Traffic Collector resource and import logs into a Log Analytics workspace. -# Configure Traffic Collector for ExpressRoute Direct +# Configure Traffic Collector for ExpressRoute Direct (Preview) This article helps you deploy an ExpressRoute Traffic Collector using the Azure portal. You learn how to add and remove an ExpressRoute Traffic Collector, associate it to an ExpressRoute Direct circuit and Log Analytics workspace. Once the ExpressRoute Traffic Collector is deployed, sampled flow logs get imported into a Log Analytics workspace. For more information, see [About ExpressRoute Traffic Collector](traffic-collector.md). Once all circuits have been removed from the ExpressRoute Traffic Collector, sel ## Next step -- Learn about [ExpressRoute Traffic Collector metrics](expressroute-monitoring-metrics-alerts.md#expressroute-traffic-collector-metrics) to monitor your ExpressRoute Traffic Collector resource.+- Learn about [ExpressRoute Traffic Collector metrics](expressroute-monitoring-metrics-alerts.md#expressroute-traffic-collector-metrics) to monitor your ExpressRoute Traffic Collector resource. |
expressroute | Traffic Collector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/traffic-collector.md | Title: Azure ExpressRoute Traffic Collector + Title: Azure ExpressRoute Traffic Collector (Preview) description: Learn about ExpressRoute Traffic Collector and the different use cases where this feature is helpful. -# Azure ExpressRoute Traffic Collector +# Azure ExpressRoute Traffic Collector (Preview) ExpressRoute Traffic Collector enables sampling of network flows sent over your ExpressRoute Direct circuits. Flow logs get sent to a [Log Analytics workspace](../azure-monitor/logs/log-analytics-overview.md) where you can create your own log queries for further analysis. You can also export the data to any visualization tool or SIEM (Security Information and Event Management) of your choice. Flow logs can be enabled for both private peering and Microsoft peering with ExpressRoute Traffic Collector. |
healthcare-apis | Events Consume Logic Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-consume-logic-apps.md | Title: Consume events with Logic Apps - Azure Health Data Services -description: Learn how to consume FHIR events with Logic Apps. + Title: Consume FHIR events with Logic Apps - Azure Health Data Services +description: Learn how to consume FHIR events with Logic Apps to enable automation workflows. The options in this example are: - Method is "Get" - URL is `"concat('https://', triggerBody()?['subject'], '/_history/', triggerBody()?['dataVersion'])"`.-- Authentication type is "Managed Identity".+- Authentication type is **Managed Identity**. - Audience is `"concat('https://', triggerBody()?['data']['resourceFhirAccount'])"`. ### Allow FHIR Reader access to your Logic App When you've specified the first four steps, add the role assignment by Managed identity. ### Add a condition -After you have given FHIR Reader access to your app, go back to the Logic App workflow Designer. Then add a condition to determine whether the event is one you want to process. Select the **+** below HTTP to "Choose an operation". On the right, search for the word "condition". Select on **Built-in** to display the Control icon. Next select **Actions** and choose **Condition**. +After you have given FHIR Reader access to your app, go back to the Logic App workflow Designer. Then add a condition to determine whether the event is one you want to process. Select the **+** below HTTP to "Choose an operation". On the right, search for the word **Condition**. Select **Built-in** to display the Control icon. Next select **Actions** and choose **Condition**. When the condition is ready, you can specify what actions happen if the condition is true or false. |
healthcare-apis | Events Disable Delete Workspace | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-disable-delete-workspace.md | Title: How to disable events and delete Azure Health Data Services workspaces - Azure Health Data Services -description: Learn how to disable events and delete Azure Health Data Services workspaces. + Title: How to disable events and delete events enabled workspaces - Azure Health Data Services +description: Learn how to disable events and delete events enabled workspaces. Previously updated : 07/11/2023 Last updated : 09/26/2023 -# How to disable events and delete Azure Health Data Services workspaces +# How to disable events and delete event enabled workspaces > [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR®)](https://www.hl7.org/fhir/) is an open healthcare specification. -In this article, learn how to disable events and delete Azure Health Data Services workspaces. +In this article, learn how to disable events and delete events enabled workspaces. ## Disable events To disable events from sending event messages for a single **Event Subscription**, the **Event Subscription** must be deleted. -1. Select the **Event Subscription** to be deleted. In this example, we select an Event Subscription named **fhir-events**. +1. Select the **Event Subscription** to be deleted. In this example, we're selecting an Event Subscription named **fhir-events**. - :::image type="content" source="media/disable-delete-workspaces/events-select-subscription.png" alt-text="Screenshot of Events subscriptions and select event subscription to be deleted." lightbox="media/disable-delete-workspaces/events-select-subscription.png"::: + :::image type="content" source="media/disable-delete-workspaces/select-event-subscription.png" alt-text="Screenshot of Events Subscriptions and select event subscription to be deleted." lightbox="media/disable-delete-workspaces/select-event-subscription.png"::: 2. 
Select **Delete** and confirm the **Event Subscription** deletion. - :::image type="content" source="media/disable-delete-workspaces/events-select-subscription-delete.png" alt-text="Screenshot of events subscriptions and select delete and confirm the event subscription to be deleted." lightbox="media/disable-delete-workspaces/events-select-subscription-delete.png"::: + :::image type="content" source="media/disable-delete-workspaces/select-subscription-delete.png" alt-text="Screenshot of events subscriptions and select delete and confirm the event subscription to be deleted." lightbox="media/disable-delete-workspaces/select-subscription-delete.png"::: -3. To completely disable events, delete all **Event Subscriptions** so that no **Event Subscriptions** remain. +3. If you have multiple **Event Subscriptions**, follow the steps to delete the **Event Subscriptions** so that no **Event Subscriptions** remain. - :::image type="content" source="media/disable-delete-workspaces/events-disable-no-subscriptions.png" alt-text="Screenshot of Events subscriptions and delete all event subscriptions to disable events." lightbox="media/disable-delete-workspaces/events-disable-no-subscriptions.png"::: + :::image type="content" source="media/disable-delete-workspaces/no-event-subscriptions-found.png" alt-text="Screenshot of Event Subscriptions and delete all event subscriptions to disable events." lightbox="media/disable-delete-workspaces/no-event-subscriptions-found.png"::: > [!NOTE]-> The FHIR service will automatically go into an **Updating** status to disable events when a full delete of **Event Subscriptions** is executed. The FHIR service will remain online while the operation is completing. +> The FHIR service will automatically go into an **Updating** status to disable events when a full delete of **Event Subscriptions** is executed. 
The FHIR service will remain online while the operation is completing, however, you won't be able to make any further configuration changes to the FHIR service until the updating has completed. -## Delete workspaces +## Delete events enabled workspaces -To avoid errors and successfully delete workspaces, follow these steps and in this specific order: +To avoid errors and successfully delete events enabled workspaces, follow these steps and in this specific order: -1. Delete all workspace associated child resources - for example: DICOM services, FHIR services, and MedTech services. -2. Delete all workspace associated Event Subscriptions. +1. Delete all workspace associated child resources (for example: DICOM services, FHIR services, and MedTech services). +2. Delete all workspace associated **Event Subscriptions**. 3. Delete workspace. ## Next steps -In this article, you learned how to disable events and delete Azure Health Data Services workspaces. +In this article, you learned how to disable events and delete events enabled workspaces. To learn about how to troubleshoot events, see |
machine-learning | How To Administrate Data Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-administrate-data-authentication.md | -> The information in this article is intended for Azure administrators who are creating the infrastructure required for an Azure Machine Learning solution. --In general, data access from studio involves the following checks: --* Who is accessing? - - There are multiple different types of authentication depending on the storage type. For example, account key, token, service principal, managed identity, and user identity. - - If authentication is made using a user identity, then it's important to know *which* user is trying to access storage. For more information on authenticating a _user_, see [authentication for Azure Machine Learning](how-to-setup-authentication.md). For more information on service-level authentication, see [authentication between Azure Machine Learning and other services](how-to-identity-based-service-authentication.md). -* Do they have permission? - - Are the credentials correct? If so, does the service principal, managed identity, etc., have the necessary permissions on the storage? Permissions are granted using Azure role-based access controls (Azure RBAC). - - [Reader](../role-based-access-control/built-in-roles.md#reader) of the storage account reads metadata of the storage. - - [Storage Blob Data Reader](../role-based-access-control/built-in-roles.md#storage-blob-data-reader) reads data within a blob container. - - [Contributor](../role-based-access-control/built-in-roles.md#contributor) allows write access to a storage account. - - More roles may be required depending on the type of storage. -* Where is access from? +> This article is intended for Azure administrators who want to create the required infrastructure for an Azure Machine Learning solution. 
++In general, data access from studio involves these checks: ++* Which user wants to access the resources? + - Depending on the storage type, different types of authentication are available, for example + - account key + - token + - service principal + - managed identity + - user identity + - For authentication based on a user identity, you must know *which* specific user tried to access the storage resource. For more information about _user_ authentication, see [authentication for Azure Machine Learning](how-to-setup-authentication.md). For more information about service-level authentication, see [authentication between Azure Machine Learning and other services](how-to-identity-based-service-authentication.md). +* Does this user have permission? + - Does the user have the correct credentials? If yes, does the service principal, managed identity, etc., have the necessary permissions for that storage resource? Permissions are granted using Azure role-based access controls (Azure RBAC). + - The storage account [Reader](../role-based-access-control/built-in-roles.md#reader) reads the storage metadata. + - The [Storage Blob Data Reader](../role-based-access-control/built-in-roles.md#storage-blob-data-reader) reads data within a blob container. + - The [Contributor](../role-based-access-control/built-in-roles.md#contributor) allows write access to a storage account. + - More roles may be required, depending on the type of storage. +* Where does the access come from? - User: Is the client IP address in the VNet/subnet range?- - Workspace: Is the workspace public or does it have a private endpoint in a VNet/subnet? + - Workspace: Is the workspace public, or does it have a private endpoint in a VNet/subnet? - Storage: Does the storage allow public access, or does it restrict access through a service endpoint or a private endpoint?-* What operation is being performed? 
- - Create, read, update, and delete (CRUD) operations on a data store/dataset are handled by Azure Machine Learning. - - Archive operation on data assets in the Studio requires the following RBAC operation: Microsoft.MachineLearningServices/workspaces/datasets/registered/delete - - Data Access calls (such as preview or schema) go to the underlying storage and need extra permissions. -* Where is this operation being run; compute resources in your Azure subscription or resources hosted in a Microsoft subscription? +* What operation will be performed? + - Azure Machine Learning handles create, read, update, and delete (CRUD) operations on a data store/dataset. + - Archive operations on data assets in the Studio require this RBAC operation: `Microsoft.MachineLearningServices/workspaces/datasets/registered/delete` + - Data Access calls (for example, preview or schema) go to the underlying storage, and need extra permissions. +* Will this operation run in your Azure subscription compute resources, or resources hosted in a Microsoft subscription? - All calls to dataset and datastore services (except the "Generate Profile" option) use resources hosted in a __Microsoft subscription__ to run the operations.- - Jobs, including the "Generate Profile" option for datasets, run on a compute resource in __your subscription__, and access the data from there. So the compute identity needs permission to the storage rather than the identity of the user submitting the job. + - Jobs, including the dataset "Generate Profile" option, run on a compute resource in __your subscription__, and access the data from that location. The compute identity needs permission to the storage resource, instead of the identity of the user that submitted the job. -The following diagram shows the general flow of a data access call. In this example, a user is trying to make a data access call through a machine learning workspace, without using any compute resource. 
+This diagram shows the general flow of a data access call. Here, a user tries to make a data access call through a machine learning workspace, without using a compute resource. :::image type="content" source="./media/concept-network-data-access/data-access-flow.svg" alt-text="Diagram of the logic flow when accessing data."::: ## Scenarios and identities -The following table lists what identities should be used for specific scenarios: +This table lists the identities to use for specific scenarios: | Scenario | Use workspace</br>Managed Service Identity (MSI) | Identity to use | |--|--|--| The following table lists what identities should be used for specific scenarios: | Access from Job | Yes/No | Compute MSI | | Access from Notebook | Yes/No | User's identity | --Data access is complex and it's important to recognize that there are many pieces to it. For example, accessing data from Azure Machine Learning studio is different than using the SDK. When using the SDK on your local development environment, you're directly accessing data in the cloud. When using studio, you aren't always directly accessing the data store from your client. Studio relies on the workspace to access data on your behalf. +Data access is complex and it involves many pieces. For example, data access from Azure Machine Learning studio is different compared to use of the SDK for data access. When you use the SDK in your local development environment, you directly access data in the cloud. When you use studio, you don't always directly access the data store from your client. Studio relies on the workspace to access data on your behalf. > [!TIP]-> If you need to access data from outside Azure Machine Learning, such as using Azure Storage Explorer, *user* identity is probably what is used. Consult the documentation for the tool or service you are using for specific information. 
For more information on how Azure Machine Learning works with data, see [Setup authentication between Azure Machine Learning and other services](how-to-identity-based-service-authentication.md). +> To access data from outside Azure Machine Learning, for example with Azure Storage Explorer, that access probably relies on the *user* identity. For specific information, review the documentation for the tool or service you're using. For more information about how Azure Machine Learning works with data, see [Setup authentication between Azure Machine Learning and other services](how-to-identity-based-service-authentication.md). ## Azure Storage Account -When using an Azure Storage Account from Azure Machine Learning studio, you must add the managed identity of the workspace to the following Azure RBAC roles for the storage account: +When you use an Azure Storage Account from Azure Machine Learning studio, you must add the managed identity of the workspace to these Azure RBAC roles for the storage account: * [Blob Data Reader](../role-based-access-control/built-in-roles.md#storage-blob-data-reader)-* If the storage account uses a private endpoint to connect to the VNet, you must grant the managed identity the [Reader](../role-based-access-control/built-in-roles.md#reader) role for the storage account private endpoint. +* If the storage account uses a private endpoint to connect to the VNet, you must grant the [Reader](../role-based-access-control/built-in-roles.md#reader) role for the storage account private endpoint to the managed identity. For more information, see [Use Azure Machine Learning studio in an Azure Virtual Network](how-to-enable-studio-virtual-network.md). -See the following sections for information on limitations when using Azure Storage Account with your workspace in a VNet. +The following sections explain the limitations of using an Azure Storage Account, with your workspace, in a VNet. 
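For example, the Blob Data Reader assignment for the workspace managed identity can be sketched with the Azure CLI (an illustrative sketch; every identifier below is a placeholder you must replace):

```azurecli
az role assignment create \
  --assignee "<workspace-managed-identity-principal-id>" \
  --role "Storage Blob Data Reader" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
```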
-### Secure communication with Azure Storage Account +### Secure communication with Azure Storage Account -To secure communication between Azure Machine Learning and Azure Storage Accounts, configure storage to [Grant access to trusted Azure services](../storage/common/storage-network-security.md#grant-access-to-trusted-azure-services). +To secure communication between Azure Machine Learning and Azure Storage Accounts, configure the storage to [Grant access to trusted Azure services](../storage/common/storage-network-security.md#grant-access-to-trusted-azure-services). ### Azure Storage firewall -When an Azure Storage account is behind a virtual network, the storage firewall can normally be used to allow your client to directly connect over the internet. However, when using studio it isn't your client that connects to the storage account; it's the Azure Machine Learning service that makes the request. The IP address of the service isn't documented and changes frequently. __Enabling the storage firewall will not allow studio to access the storage account in a VNet configuration__. +When an Azure Storage account is located behind a virtual network, the storage firewall can normally be used to allow your client to directly connect over the internet. However, when using studio, your client doesn't connect to the storage account. The Azure Machine Learning service that makes the request connects to the storage account. The IP address of the service isn't documented, and it changes frequently. __Enabling the storage firewall will not allow studio to access the storage account in a VNet configuration__. 
### Azure Storage endpoint type -When the workspace uses a private endpoint and the storage account is also in the VNet, there are extra validation requirements when using studio: +When the workspace uses a private endpoint, and the storage account is also in the VNet, extra validation requirements arise when using studio: -* If the storage account uses a __service endpoint__, the workspace private endpoint and storage service endpoint must be in the same subnet of the VNet. -* If the storage account uses a __private endpoint__, the workspace private endpoint and storage private endpoint must be in the same VNet. In this case, they can be in different subnets. +* If the storage account uses a __service endpoint__, the workspace private endpoint and storage service endpoint must be located in the same subnet of the VNet. +* If the storage account uses a __private endpoint__, the workspace private endpoint and storage private endpoint must be located in the same VNet. In this case, they can be in different subnets. ## Azure Data Lake Storage Gen1 -When using Azure Data Lake Storage Gen1 as a datastore, you can only use POSIX-style access control lists. You can assign the workspace's managed identity access to resources just like any other security principal. For more information, see [Access control in Azure Data Lake Storage Gen1](../data-lake-store/data-lake-store-access-control.md). +When using Azure Data Lake Storage Gen1 as a datastore, you can only use POSIX-style access control lists. You can assign the workspace's managed identity access to resources, just like any other security principal. For more information, see [Access control in Azure Data Lake Storage Gen1](../data-lake-store/data-lake-store-access-control.md). ## Azure Data Lake Storage Gen2 When using Azure Data Lake Storage Gen2 as a datastore, you can use both Azure RBAC and POSIX-style access control lists (ACLs) to control data access inside of a virtual network.
-__To use Azure RBAC__, follow the steps in the [Datastore: Azure Storage Account](how-to-enable-studio-virtual-network.md#datastore-azure-storage-account) section of the 'Use Azure Machine Learning studio in an Azure Virtual Network' article. Data Lake Storage Gen2 is based on Azure Storage, so the same steps apply when using Azure RBAC. +__To use Azure RBAC__, follow the steps described in this [Datastore: Azure Storage Account](how-to-enable-studio-virtual-network.md#datastore-azure-storage-account) article section. Data Lake Storage Gen2 is based on Azure Storage, so the same steps apply when using Azure RBAC. __To use ACLs__, the managed identity of the workspace can be assigned access just like any other security principal. For more information, see [Access control lists on files and directories](../storage/blobs/data-lake-storage-access-control.md#access-control-lists-on-files-and-directories). - ## Next steps -For information on enabling studio in a network, see [Use Azure Machine Learning studio in an Azure Virtual Network](how-to-enable-studio-virtual-network.md). +For information about enabling studio in a network, see [Use Azure Machine Learning studio in an Azure Virtual Network](how-to-enable-studio-virtual-network.md). |
machine-learning | How To Create Vector Index | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-vector-index.md | After you create a vector index, you can add it to a prompt flow from the prompt An example of a plain string you can input in this case would be: `How to use SDK V2?`. Here is an example of an embedding as an input: `${embed_the_question.output}`. Passing a plain string only works when the Vector Index is used in the workspace that created it. +## Supported File Types ++Supported file types for creating a vector index job: `.txt`, `.md`, `.html`, `.htm`, `.py`, `.pdf`, `.ppt`, `.pptx`, `.doc`, `.docx`, `.xls`, `.xlsx`. Any other file types will be ignored during creation. + ## Next steps [Get started with RAG by using a prompt flow sample (preview)](how-to-use-pipelines-prompt-flow.md) |
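As a client-side pre-check, the supported-extension rule above can be expressed in a few lines of Python (an illustrative sketch with hypothetical helper names; the service itself performs this filtering during index creation):

```python
from pathlib import Path

# Extensions a vector index job ingests, per the supported file types list.
SUPPORTED_EXTENSIONS = {
    ".txt", ".md", ".html", ".htm", ".py", ".pdf",
    ".ppt", ".pptx", ".doc", ".docx", ".xls", ".xlsx",
}

def partition_files(paths):
    """Split paths into (ingested, ignored) by file extension,
    case-insensitively, mirroring the documented filter."""
    ingested, ignored = [], []
    for p in paths:
        target = ingested if Path(p).suffix.lower() in SUPPORTED_EXTENSIONS else ignored
        target.append(p)
    return ingested, ignored
```

Running such a check before submitting the job makes it obvious which source files would be silently skipped.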
machine-learning | How To Manage Workspace | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace.md | As your needs change or requirements for automation increase you can also manage * An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today. * If using the Python SDK: 1. [Install the SDK v2](https://aka.ms/sdk-v2-install).+ 1. Install azure-identity: `pip install azure-identity`. If in a notebook cell, use `%pip install azure-identity`. 1. Provide your subscription details [!notebook-python[](~/azureml-examples-main/sdk/python/resources/workspace/workspace.ipynb?name=subscription_id)] |
machine-learning | How To Managed Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-managed-network.md | Before following the steps in this article, make sure you have the following pre > The creation of the managed VNet is deferred until a compute resource is created or provisioning is manually started. When allowing automatic creation, it can take around __30 minutes__ to create the first compute resource as it is also provisioning the network. For more information, see [Manually provision the network](#manually-provision-a-managed-vnet). > [!IMPORTANT]-> __If you plan to submit serverless spark jobs__, you must manually start provisioning. For more information, see the [configure for serverless spark jobs](#configure-for-serverless-spark-jobs) section. +> __If you plan to submit serverless Spark jobs__, you must manually start provisioning. For more information, see the [configure for serverless Spark jobs](#configure-for-serverless-spark-jobs) section. # [Azure CLI](#tab/azure-cli) To configure a managed VNet that allows internet outbound communications, use th * __Resource type__: The type of the Azure resource. * __Resource name__: The name of the Azure resource. * __Sub Resource__: The sub resource of the Azure resource type.- * __Spark enabled__: Select this option if you want to enable serverless spark jobs for the workspace. This option is only available if the resource type is Azure Storage. + * __Spark enabled__: Select this option if you want to enable serverless Spark jobs for the workspace. This option is only available if the resource type is Azure Storage. :::image type="content" source="./media/how-to-managed-network/outbound-rule-private-endpoint.png" alt-text="Screenshot of adding an outbound rule for a private endpoint." 
lightbox="./media/how-to-managed-network/outbound-rule-private-endpoint.png"::: To configure a managed VNet that allows internet outbound communications, use th > The managed VNet is automatically provisioned when you create a compute resource. When allowing automatic creation, it can take around __30 minutes__ to create the first compute resource as it is also provisioning the network. If you configured FQDN outbound rules, the first FQDN rule adds around __10 minutes__ to the provisioning time. For more information, see [Manually provision the network](#manually-provision-a-managed-vnet). > [!IMPORTANT]-> __If you plan to submit serverless spark jobs__, you must manually start provisioning. For more information, see the [configure for serverless spark jobs](#configure-for-serverless-spark-jobs) section. +> __If you plan to submit serverless Spark jobs__, you must manually start provisioning. For more information, see the [configure for serverless Spark jobs](#configure-for-serverless-spark-jobs) section. # [Azure CLI](#tab/azure-cli) To configure a managed VNet that allows only approved outbound communications, u * __Resource type__: The type of the Azure resource. * __Resource name__: The name of the Azure resource. * __Sub Resource__: The sub resource of the Azure resource type.- * __Spark enabled__: Select this option if you want to enable serverless spark jobs for the workspace. This option is only available if the resource type is Azure Storage. + * __Spark enabled__: Select this option if you want to enable serverless Spark jobs for the workspace. This option is only available if the resource type is Azure Storage. > [!TIP] > Azure Machine Learning managed VNet doesn't support creating a private endpoint to all Azure resource types. For a list of supported resources, see the [Private endpoints](#private-endpoints) section. 
To configure a managed VNet that allows only approved outbound communications, u -## Configure for serverless spark jobs +## Configure for serverless Spark jobs > [!TIP]-> The steps in this section are only needed if you plan to submit __serverless spark jobs__. If you aren't going to be submitting serverless spark jobs, you can skip this section. +> The steps in this section are only needed if you plan to submit __serverless Spark jobs__. If you aren't going to be submitting serverless Spark jobs, you can skip this section. -To enable the [serverless spark jobs](how-to-submit-spark-jobs.md) for the managed VNet, you must perform the following actions: +To enable the [serverless Spark jobs](how-to-submit-spark-jobs.md) for the managed VNet, you must perform the following actions: * Configure a managed VNet for the workspace and add an outbound private endpoint for the Azure Storage Account.-* After you configure the managed VNet, provision it and flag it to allow spark jobs. +* After you configure the managed VNet, provision it and flag it to allow Spark jobs. 1. Configure an outbound private endpoint. To enable the [serverless spark jobs](how-to-submit-spark-jobs.md) for the manag # [Azure CLI](#tab/azure-cli) - The following example shows how to provision a managed VNet for serverless spark jobs by using the `--include-spark` parameter. + The following example shows how to provision a managed VNet for serverless Spark jobs by using the `--include-spark` parameter. 
```azurecli az ml workspace provision-network -g my_resource_group -n my_workspace_name --include-spark To enable the [serverless spark jobs](how-to-submit-spark-jobs.md) for the manag # [Python SDK](#tab/python) - The following example shows how to provision a managed VNet for serverless spark jobs: + The following example shows how to provision a managed VNet for serverless Spark jobs: ```python # Connect to a workspace named "myworkspace" ml_client = MLClient(DefaultAzureCredential(), subscription_id, resource_group, workspace_name="myworkspace") - # whether to provision spark vnet as well + # whether to provision Spark vnet as well include_spark = True provision_network_result = ml_client.workspaces.begin_provision_network(workspace_name=ws_name, include_spark=include_spark).result() To enable the [serverless spark jobs](how-to-submit-spark-jobs.md) for the manag # [Azure portal](#tab/portal) - Use the __Azure CLI__ or __Python SDK__ tabs to learn how to manually provision the managed VNet with serverless spark support. + Use the __Azure CLI__ or __Python SDK__ tabs to learn how to manually provision the managed VNet with serverless Spark support. To reduce the wait time when someone attempts to create the first compute, you c The following example shows how to provision a managed VNet. > [!TIP]-> If you plan to submit serverless spark jobs, add the `--include-spark` parameter. +> If you plan to submit serverless Spark jobs, add the `--include-spark` parameter. 
```azurecli az ml workspace provision-network -g my_resource_group -n my_workspace_name The following example shows how to provision a managed VNet: # Connect to a workspace named "myworkspace" ml_client = MLClient(DefaultAzureCredential(), subscription_id, resource_group, workspace_name="myworkspace") -# whether to provision spark vnet as well +# whether to provision Spark vnet as well include_spark = True provision_network_result = ml_client.workspaces.begin_provision_network(workspace_name=ws_name, include_spark=include_spark).result() provision_network_result = ml_client.workspaces.begin_provision_network(workspac # [Azure portal](#tab/portal) -Use the __Azure CLI__ or __Python SDK__ tabs to learn how to manually provision the managed VNet with serverless spark support. +Use the __Azure CLI__ or __Python SDK__ tabs to learn how to manually provision the managed VNet with serverless Spark support. When you create a private endpoint, you provide the _resource type_ and _subreso When you create a private endpoint for Azure Machine Learning dependency resources, such as Azure Storage, Azure Container Registry, and Azure Key Vault, the resource can be in a different Azure subscription. However, the resource must be in the same tenant as the Azure Machine Learning workspace. > [!IMPORTANT]-> When configuring private endpoints for an Azure Machine Learning managed VNet, the private endpoints are only created when created when the first _compute is created_ or when managed VNet provisioning is forced. For more information on forcing the managed VNet provisioning, see [Configure for serverless spark jobs](#manually-provision-a-managed-vnet). +> When configuring private endpoints for an Azure Machine Learning managed VNet, the private endpoints are only created when the first _compute is created_ or when managed VNet provisioning is forced.
For more information on forcing the managed VNet provisioning, see [Configure for serverless Spark jobs](#manually-provision-a-managed-vnet). ## Pricing |
machine-learning | Tutorial Cloud Workstation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-cloud-workstation.md | In order for your script to run, you need to be working in an environment config Files you upload are stored in an Azure file share, and these files are mounted to each compute instance and shared within the workspace. - 1. Download this conda environment file, [*workstation_env.yml*](https://azuremlexampledata.blob.core.windows.net/datasets/workstation_env.yml) to your computer. + 1. Download this conda environment file, [*workstation_env.yml*](https://github.com/Azure/azureml-examples/blob/main/tutorials/get-started-notebooks/workstation_env.yml) to your computer by using the **Download raw file** button at the top right. + <!-- use this link instead once it works again [*workstation_env.yml*](https://azuremlexampledata.blob.core.windows.net/datasets/workstation_env.yml) to your computer. --> 1. Select **Add files**, then select **Upload files** to upload it to your workspace. :::image type="content" source="media/tutorial-cloud-workstation/upload-files.png" alt-text="Screenshot shows how to upload files to your workspace."::: In order for your script to run, you need to be working in an environment config 1. Select **workstation_env.yml** file you downloaded. 1. Select **Upload**. - You'll see the *workstation_env.yml* file under your username folder in the **Files** tab. Select this file to preview it, and see what dependencies it specifies. -- :::image type="content" source="media/tutorial-cloud-workstation/view-yml.png" alt-text="Screenshot shows the yml file that you uploaded."::: + You'll see the *workstation_env.yml* file under your username folder in the **Files** tab. Select this file to preview it, and see what dependencies it specifies. 
You'll see contents like this: + ::: code language="yml" source="~/azureml-examples-main/tutorials/get-started-notebooks/workstation_env.yml" ::: * **Create a kernel.** This code uses `sklearn` for training and MLflow for logging the metrics. [!notebook-python[] (~/azureml-examples-main/tutorials/get-started-notebooks/cloud-workstation.ipynb?name=gbt)] + > [!NOTE] + > You can ignore the mlflow warnings. You'll still get all the results you need tracked. + ## Iterate Now that you have model results, you may want to change something and try again. For example, try a different classifier technique: [!notebook-python[] (~/azureml-examples-main/tutorials/get-started-notebooks/cloud-workstation.ipynb?name=ada)] +> [!NOTE] +> You can ignore the mlflow warnings. You'll still get all the results you need tracked. + ## Examine results Now that you've tried two different models, use the results tracked by `MLflow` to decide which model is better. You can reference metrics like accuracy, or other indicators that matter most for your scenarios. You can dive into these results in more detail by looking at the jobs created by `MLflow`. For now, you're running this code on your compute instance, which is your Azure python train.py ``` +> [!NOTE] +> You can ignore the mlflow warnings. You'll still get all the metrics and images from autologging. + ## Examine script results Go back to **Jobs** to see the results of your training script. Keep in mind that the training data changes with each split, so the results differ between runs as well. |
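For orientation, a conda environment file of this kind declares a name, channels, and dependencies. The sketch below is illustrative only — the package list is an assumption, not the actual contents of *workstation_env.yml*:

```yml
# Hypothetical example of a conda environment file; the real
# workstation_env.yml in the sample repo may pin different packages.
name: workstation_env
channels:
  - conda-forge
dependencies:
  - python=3.10
  - pip
  - pip:
      - scikit-learn
      - pandas
      - mlflow
      - azureml-mlflow
```

When you upload a file like this and create a kernel from it, conda resolves the listed dependencies into an isolated environment your notebooks can select.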
mysql | Migrate Single Flexible Mysql Import Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/migrate/migrate-single-flexible-mysql-import-cli.md | To open the Cloud Shell, select **Try it** from the upper right corner of a code If you prefer to install and use the CLI locally, this tutorial requires Azure CLI version 2.52.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). -## Prerequisites +## Setup You must sign in to your account using the [az login](/cli/azure/reference-index#az-login) command. Note the **id** property, which refers to your Azure account's **Subscription ID**. Select the specific subscription in which the source Azure Database for MySQL - az account set --subscription <subscription id> ``` -## Limitations +## Limitations and prerequisites - The source Azure Database for MySQL - Single Server and the target Azure Database for MySQL - Flexible Server must be in the same subscription, resource group, region, and on the same MySQL version. MySQL Import across subscriptions, resource groups, regions, and versions isn't possible. - MySQL versions supported by Azure MySQL Import are 5.7 and 8.0. If you are on a different major MySQL version on Single Server, make sure to upgrade your version on your Single Server instance before triggering the import command. - MySQL Import for Single Servers with Legacy Storage architecture (General Purpose storage V1) isn't supported. You must upgrade your storage to the latest storage architecture (General Purpose storage V2) to trigger a MySQL Import operation. Find your storage type and upgrade steps by following directions [here](../single-server/concepts-pricing-tiers.md#how-can-i-determine-which-storage-type-my-server-is-running-on). - MySQL Import to an existing Azure MySQL Flexible Server isn't supported. The CLI command initiates the import of a new Azure MySQL Flexible Server.
- If the flexible target server is provisioned as non-HA (High Availability disabled) when updating the CLI command parameters, it can later be switched to Same-Zone HA but not Zone-Redundant HA.-- MySQL Import doesn't currently support Azure Database for MySQL Single Servers with Infrastructure Double Encryption.+- For CMK-enabled Single Server instances, the MySQL Import command requires you to provide mandatory input parameters for enabling CMK on the target Flexible Server. +- If the Single Server instance has 'Infrastructure Double Encryption' enabled, enabling Customer Managed Key (CMK) on the target Flexible Server instance is recommended to support similar functionality. You can enable CMK on the target server with the MySQL Import CLI input parameters, or after the migration. - Only instance-level import is supported. No option to import selected databases within an instance is provided. - Below items should be copied from source to target by the user post MySQL Import operation: - Firewall rules az network private-dns zone create -g testGroup -n myserver.private.contoso.com az mysql flexible-server import create --data-source-type "mysql_single" --data-source "test-single-server" --resource-group "test-rg" --name "test-flexible-server" --high-availability ZoneRedundant --zone 1 --standby-zone 3 --vnet "myVnet" --subnet "mySubnet" --private-dns-zone "myserver.private.contoso.com" ``` -The following example takes in the data source information for Single Server named 'test-single-server' with CUstomer Managed Key (CMK) enabled and target Flexible Server information, creates a target Flexible Server named `test-flexible-server` and performs an import from source to target. For CMK enabled Single Server instances, MySQL Import command requires you to provide mandatory input parameters for enabling CMK : --key keyIdentifierOfTestKey --identity testIdentity.
+The following example takes in the data source information for the Single Server named 'test-single-server' with Customer Managed Key (CMK) enabled and the target Flexible Server information, creates a target Flexible Server named `test-flexible-server`, and performs an import from source to target. For CMK-enabled Single Server instances, the MySQL Import command requires you to provide mandatory input parameters for enabling CMK: `--key keyIdentifierOfTestKey --identity testIdentity`. ```azurecli-interactive # create keyvault iops | 500 | Number of IOPS to be allocated for the target Azure Database for My - The MySQL version, region, subscription and resource for the target flexible server must be equal to that of the source single server. - The storage size for target flexible server should be equal to or greater than on the source single server.+- If the Single Server instance has 'Infrastructure Double Encryption' enabled, enabling Customer Managed Key (CMK) on the target Flexible Server instance is recommended to support similar functionality. You can enable CMK on the target server with the MySQL Import CLI input parameters, or after the migration. ## How long does MySQL Import take to migrate my Single Server instance? |
nat-gateway | Nat Availability Zones | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/nat-availability-zones.md | A zonal promise for zone isolation scenarios exists when a virtual machine insta *Figure 3: Zonal isolation by creating zonal stacks with the same zone NAT gateway, public IPs, and virtual machines provides the best method of ensuring zone resiliency against outages.* -Failure of outbound connectivity due to a zone outage is isolated to the specific zone affected. The outage won't affect the other zonal stacks where other NAT gateways are deployed with their own subnets and zonal public IPs. +> [!NOTE] +> Creating zonal stacks for each availability zone within a region is the most effective method for building zone resiliency against outages for NAT gateway. However, this configuration only safeguards the remaining availability zones where the outage did **not** take place. With this configuration, failure of outbound connectivity from a zone outage is isolated to the specific zone affected. The outage won't affect the other zonal stacks where other NAT gateways are deployed with their own subnets and zonal public IPs. + -Creating zonal stacks for each availability zone within a region is the most effective method for building zone-resiliency against outages for NAT gateway. ### Integration of inbound with a standard load balancer |
nat-gateway | Nat Gateway Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/nat-gateway-resource.md | NAT Gateway interacts with IP and IP transport headers of UDP and TCP flows. NAT ## TCP reset -A TCP reset packet is sent when a NAT gateway detects traffic on a connection flow that doesn't exist. TCP reset is uni-directional for a NAT gateway. +A TCP reset packet is sent when a NAT gateway detects traffic on a connection flow that doesn't exist. The TCP reset packet indicates to the receiving endpoint that the connection flow was released and that any future communication on the same TCP connection will fail. TCP reset is uni-directional for a NAT gateway. The connection flow may not exist if: -* The connection flow idle timeout was reached and caused the connection to close earlier. +* The idle timeout was reached after a period of inactivity on the connection flow and the connection was silently dropped. -* The sender, either from the Azure network side or from the public internet side, sent traffic after the connection closed. +* The sender, either from the Azure network side or from the public internet side, sent traffic after the connection dropped. -NAT Gateway silently drops a connection flow when the idle timeout of a flow is reached. A TCP reset packet is sent only upon detecting traffic on the closed connection flow. This operation means a TCP reset packet may not be sent right away. +A TCP reset packet is sent only upon detecting traffic on the dropped connection flow. This operation means a TCP reset packet may not be sent right away after a connection flow has dropped. The system sends a TCP reset packet in response to detecting traffic on a nonexistent connection flow, regardless of whether the traffic originates from the Azure network side or the public internet side.
The system sends a TCP reset packet in response to detecting traffic on a nonexisti A NAT gateway provides a configurable idle timeout range of 4 minutes to 120 minutes for TCP protocols. UDP protocols have a nonconfigurable idle timeout of 4 minutes. -When a connection goes idle, the NAT gateway holds onto SNAT ports until the connection idle times out. Because long idle timeout timers can unnecessarily increase the likelihood of SNAT port exhaustion, it isn't recommended to increase the TCP idle timeout duration to longer than the default time of 4 minutes. The idle timer doesn't affect a flow that never goes idle. +When a connection goes idle, the NAT gateway holds onto the SNAT port until the connection idle times out. Because long idle timeout timers can unnecessarily increase the likelihood of SNAT port exhaustion, it isn't recommended to increase the TCP idle timeout duration to longer than the default time of 4 minutes. The idle timer doesn't affect a flow that never goes idle. -TCP keepalives can be used to provide a pattern of refreshing long idle connections and endpoint liveness detection. For more information, see these [.NET examples] (/dotnet/api/system.net.servicepoint.settcpkeepalive?view=net-7.0). TCP keepalives appear as duplicate ACKs to the endpoints, are low overhead, and invisible to the application layer. +TCP keepalives can be used to provide a pattern of refreshing long idle connections and endpoint liveness detection. For more information, see these [.NET examples](/dotnet/api/system.net.servicepoint.settcpkeepalive). TCP keepalives appear as duplicate ACKs to the endpoints, are low overhead, and invisible to the application layer. UDP idle timeout timers aren't configurable; UDP keepalives should be used to ensure that the idle timeout value isn't reached, and that the connection is maintained. Unlike TCP connections, a UDP keepalive enabled on one side of the connection only applies to traffic flow in one direction.
UDP keepalives must be enabled on both sides of the traffic flow in order to keep the traffic flow alive. The total number of connections that a NAT gateway can support at any given time ## Limitations -- Basic load balancers and basic public IP addresses aren't compatible with NAT. Use standard SKU load balancers and public IPs instead.+- Basic load balancers and basic public IP addresses aren't compatible with NAT gateway. Use standard SKU load balancers and public IPs instead. - To upgrade a load balancer from basic to standard, see [Upgrade Azure Public Load Balancer](../load-balancer/upgrade-basic-standard.md) |
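The TCP keepalive pattern described above can be applied at the socket level in most languages, not just .NET. Here's a minimal illustrative sketch using Python's standard `socket` module — the timer values are examples, and should stay comfortably below the configured NAT gateway idle timeout:

```python
import socket

# Enable TCP keepalive so a long-idle but still-wanted connection keeps
# refreshing the NAT gateway's idle timer instead of being silently dropped.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

# Linux-specific tuning (guarded, since these constants aren't available
# everywhere): start probing after 120 s idle, probe every 30 s, and give up
# after 4 unanswered probes. Keep the probe cadence well under the NAT
# gateway TCP idle timeout (4-120 minutes).
if hasattr(socket, "TCP_KEEPIDLE"):
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 120)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 30)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 4)
```

The kernel then sends the keepalive probes for you; the application sees nothing unless the peer stops answering, at which point the connection fails.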
nat-gateway | Tutorial Hub Spoke Nat Firewall | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/tutorial-hub-spoke-nat-firewall.md | NAT gateway can be integrated with Azure Firewall by configuring NAT gateway dir :::image type="content" source="./media/tutorial-hub-spoke-nat-firewall/resources-diagram.png" alt-text="Diagram of Azure resources created in tutorial." lightbox="./media/tutorial-hub-spoke-nat-firewall/resources-diagram.png"::: +>[!NOTE] +>Azure NAT Gateway is not currently supported in secured virtual hub network (vWAN) architectures. You must deploy using a hub virtual network architecture as described in this tutorial. For more information about Azure Firewall architecture options, see [What are the Azure Firewall Manager architecture options?](/azure/firewall-manager/vhubs-and-vnets). + In this tutorial, you learn how to: > [!div class="checklist"] |
network-watcher | Diagnose Vm Network Routing Problem | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-routing-problem.md | Title: 'Tutorial: Diagnose a VM network routing problem - Azure portal' description: In this tutorial, you learn how to diagnose a virtual machine network routing problem using the next hop capability of Azure Network Watcher.- Previously updated : 02/28/2023- -# Customer intent: I want to diagnose virtual machine (VM) network routing problem that prevents communication to different destinations. Last updated : 09/26/2023++# CustomerIntent: As an Azure administrator, I want to diagnose a virtual machine (VM) network routing problem that prevents it from communicating with the internet. # Tutorial: Diagnose a virtual machine network routing problem using the Azure portal -When you deploy a virtual machine (VM), Azure creates several [system default routes](/azure/virtual-network/virtual-networks-udr-overview#system-routes?toc=%2Fazure%2Fnetwork-watcher%2Ftoc.json&tabs=json) for it. You can create [custom routes](/azure/virtual-network/virtual-networks-udr-overview#custom-routes?toc=%2Fazure%2Fnetwork-watcher%2Ftoc.json&tabs=json) to override some of Azure's system routes. Sometimes, a custom route can result in a VM not being able to communicate with the intended destination. You can use Azure Network Watcher [next hop](network-watcher-next-hop-overview.md) capability to troubleshoot and diagnose the VM routing problem that's preventing it from correctly communicating with other resources. +In this tutorial, you use the Azure Network Watcher [next hop](network-watcher-next-hop-overview.md) tool to troubleshoot and diagnose a VM routing problem that's preventing it from correctly communicating with other resources. Next hop shows you that the routing problem is caused by a [custom route](../virtual-network/virtual-networks-udr-overview.md#custom-routes).
In this tutorial, you learn how to: In this tutorial, you learn how to: > * Create a custom route > * Diagnose a routing problem -If you prefer, you can diagnose a virtual machine network routing problem using the [Azure CLI](diagnose-vm-network-routing-problem-cli.md) or [Azure PowerShell](diagnose-vm-network-routing-problem-powershell.md) tutorials. +If you prefer, you can diagnose a virtual machine network routing problem using the [Azure CLI](diagnose-vm-network-routing-problem-cli.md) or [Azure PowerShell](diagnose-vm-network-routing-problem-powershell.md) versions of the tutorial. ## Prerequisites - An Azure account with an active subscription. If you don't have one, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. -## Sign in to Azure --Sign in to the [Azure portal](https://portal.azure.com). - ## Create a virtual network In this section, you create a virtual network. -1. In the search box at the top of the portal, enter *virtual networks*. Select **Virtual networks** in the search results. +1. Sign in to the [Azure portal](https://portal.azure.com). ++1. In the search box at the top of the portal, enter ***virtual networks***. Select **Virtual networks** in the search results. :::image type="content" source="./media/diagnose-vm-network-routing-problem/virtual-network-azure-portal.png" alt-text="Screenshot shows searching for virtual networks in the Azure portal."::: In this section, you create a virtual network. | | | | **Project Details** | | | Subscription | Select your Azure subscription. |- | Resource Group | Select **Create new**. </br> Enter *myResourceGroup* in **Name**. </br> Select **OK**. | + | Resource Group | Select **Create new**. </br> Enter ***myResourceGroup*** in **Name**. </br> Select **OK**. | | **Instance details** | |- | Name | Enter *myVNet*. | + | Name | Enter ***myVNet***. | | Region | Select **East US**. | 1. 
Select the **IP Addresses** tab, or select **Next: IP Addresses** button at the bottom of the page. In this section, you create a virtual network. | Setting | Value | | | |- | IPv4 address space | Enter *10.0.0.0/16*. | - | Subnet name | Enter *mySubnet*. | - | Subnet address range | Enter *10.0.0.0/24*. | + | IPv4 address space | Enter ***10.0.0.0/16***. | + | Subnet name | Enter ***mySubnet***. | + | Subnet address range | Enter ***10.0.0.0/24***. | 1. Select the **Security** tab, or select the **Next: Security** button at the bottom of the page. In this section, you create a virtual network. | Setting | Value | | | |- | Bastion name | Enter *myBastionHost*. | - | AzureBastionSubnet address space | Enter *10.0.3.0/24*. | - | Public IP Address | Select **Create new**. </br> Enter *myBastionIP* for **Name**. </br> Select **OK**. | + | Bastion name | Enter ***myBastionHost***. | + | AzureBastionSubnet address space | Enter ***10.0.3.0/24***. | + | Public IP Address | Select **Create new**. </br> Enter ***myBastionIP*** for **Name**. </br> Select **OK**. | 1. Select the **Review + create** tab or select the **Review + create** button. In this section, you create two virtual machines: **myVM** and **myNVA**. You us ### Create first virtual machine -1. In the search box at the top of the portal, enter *virtual machines*. Select **Virtual machines** in the search results. +1. In the search box at the top of the portal, enter ***virtual machines***. Select **Virtual machines** in the search results. 2. Select **+ Create** and then select **Azure virtual machine**. In this section, you create two virtual machines: **myVM** and **myNVA**. You us | Subscription | Select your Azure subscription. | | Resource Group | Select **myResourceGroup**. | | **Instance details** | |- | Virtual machine name | Enter *myVM*. | + | Virtual machine name | Enter ***myVM***. | | Region | Select **(US) East US**. | | Availability Options | Select **No infrastructure redundancy required**. 
| | Security type | Select **Standard**. | In this section, you create two virtual machines: **myVM** and **myNVA**. You us ### Create second virtual machine -Follow the previous steps that you used to create **myVM** virtual machine and enter *myNVA* for the virtual machine name. +Follow the previous steps that you used to create **myVM** virtual machine and enter ***myNVA*** for the virtual machine name. ## Test network communication using Network Watcher next hop Use the next hop capability of Network Watcher to determine which route Azure is using to route traffic from **myVM**, which has one network interface with one IP configuration -1. In the search box at the top of the portal, enter *network watcher*. Select **Network Watcher** in the search results. +1. In the search box at the top of the portal, enter ***network watcher***. Select **Network Watcher** in the search results. 1. Under **Network diagnostic tools**, select **Next hop**. Enter or select the following values: Use the next hop capability of Network Watcher to determine which route Azure is | Resource group | Select **myResourceGroup**. | | Virtual machine | Select **myVM**. | | Network interface | Leave the default. |- | Source IP address | Enter *10.0.0.4* or the IP of your VM if it's different. | - | Destination IP address | Enter *13.107.21.200* to test the communication to `www.bing.com`. | + | Source IP address | Enter ***10.0.0.4*** or the IP of your VM if it's different. | + | Destination IP address | Enter ***13.107.21.200*** to test the communication to `www.bing.com`. | 1. Select **Next hop** button to start the test. The test result shows information about the next hop like the next hop type, its IP address, and the route table ID used to route traffic. 
The result of testing **13.107.21.200** shows that the next hop type is **Internet** and the route table ID is **System Route** which means traffic destined to `www.bing.com` from **myVM** is routed to the internet using Azure default system route. Use the next hop capability of Network Watcher to determine which route Azure is To further analyze routing, review the effective routes for **myVM** network interface. -1. In the search box at the top of the portal, enter *virtual machines*. Select **Virtual machines** in the search results. +1. In the search box at the top of the portal, enter ***virtual machines***. Select **Virtual machines** in the search results. 1. Under **Settings**, select **Networking**, then select the network interface. Next, you create a static custom route to override Azure default system routes a In this section, you create a static custom route (user-defined route) in a route table that forces all traffic destined outside the virtual network to a specific IP address. Forcing traffic to a virtual network appliance is a common scenario. -1. In the search box at the top of the portal, enter *route tables*. Select **Route tables** in the search results. +1. In the search box at the top of the portal, enter ***route tables***. Select **Route tables** in the search results. 1. Select **+ Create** to create a new route table. In the **Create Route table** page, enter or select the following values: In this section, you create a static custom route (user-defined route) in a rout | Resource group | Select **myResourceGroup**. | | **Instance Details** | | | Region | Select **East US**. |- | Name | Enter *myRouteTable*. | + | Name | Enter ***myRouteTable***. | | Propagate gateway routes | Leave the default. | 1. Select **Review + create**. In this section, you create a static custom route (user-defined route) in a rout | Setting | Value | | - | |- | Route name | Enter *myRoute*. | + | Route name | Enter ***myRoute***. 
| | Address prefix destination | Select **IP Addresses**. |- | Destination IP addresses/CIDR ranges | Enter *0.0.0.0/0*. | + | Destination IP addresses/CIDR ranges | Enter ***0.0.0.0/0***. | | Next hop type | Select **Virtual appliance**. |- | next hop address | Enter *10.0.0.5*. | + | Next hop address | Enter ***10.0.0.5***. | 1. Select **Add**. The custom route with prefix 0.0.0.0/0 overrode the Azure default route and caused a ## Clean up resources -When no longer needed, delete the resource group and all of the resources it contains: +When no longer needed, delete the **myResourceGroup** resource group and all of the resources it contains: ++1. In the search box at the top of the portal, enter ***myResourceGroup***. Select **myResourceGroup** from the search results. ++1. Select **Delete resource group**. -1. Enter *myResourceGroup* in the search box at the top of the portal. When you see **myResourceGroup** in the search results, select it. -2. Select **Delete resource group**. -3. Enter *myResourceGroup* for **TYPE THE RESOURCE GROUP NAME:** and select **Delete**. +1. In **Delete a resource group**, enter ***myResourceGroup***, and then select **Delete**. -## Next steps +1. Select **Delete** to confirm the deletion of the resource group and all its resources. -In this tutorial, you created a virtual machine and used Network Watcher next hop to diagnose routing to different destinations. To learn more about routing in Azure, see [Virtual network traffic routing](../virtual-network/virtual-networks-udr-overview.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json). +## Next step -For outbound VM connections, you can use Network Watcher [connection troubleshoot](network-watcher-connectivity-portal.md) capability to determine the latency, allowed and denied network traffic between the VM and an endpoint, and the route to an endpoint.
+To learn how to monitor communication between two virtual machines, advance to the next tutorial: -To learn how to monitor communication between two virtual machines, advance to the next tutorial. > [!div class="nextstepaction"] > [Monitor a network connection](monitor-vm-communication.md) |
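The behavior this tutorial diagnoses follows Azure's route selection rules: the route with the longest matching prefix wins, and a user-defined route takes precedence over a system route with the same prefix — which is why the `0.0.0.0/0` custom route captures internet-bound traffic. A minimal sketch of the longest-prefix step using Python's standard `ipaddress` module (the route table and next-hop labels below are illustrative, not taken from Azure):

```python
import ipaddress

# Illustrative effective routes after the tutorial's user-defined route is
# applied: the VNet system route plus the 0.0.0.0/0 route to the NVA.
routes = {
    "10.0.0.0/16": "VNetLocal",
    "0.0.0.0/0": "VirtualAppliance 10.0.0.5",
}

def next_hop(destination: str) -> str:
    """Return the next hop of the route with the longest matching prefix."""
    dest = ipaddress.ip_address(destination)
    best = None
    for prefix, hop in routes.items():
        net = ipaddress.ip_network(prefix)
        if dest in net and (best is None or net.prefixlen > best[0]):
            best = (net.prefixlen, hop)
    return best[1]

print(next_hop("13.107.21.200"))  # internet-bound -> VirtualAppliance 10.0.0.5
print(next_hop("10.0.0.7"))       # in-VNet traffic -> VNetLocal
```

Azure's actual evaluation also weighs route origin (user-defined over BGP over system) when prefixes tie; this sketch covers only the longest-prefix comparison.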
network-watcher | Nsg Flow Logs Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/nsg-flow-logs-tutorial.md | + + Title: 'Tutorial: Log network traffic flow to and from a virtual machine' ++description: In this tutorial, you learn how to log network traffic flow to and from a virtual machine (VM) using the Network Watcher NSG flow logs capability. ++++ Last updated : 09/26/2023+# CustomerIntent: As an Azure administrator, I need to log the network traffic to and from a virtual machine (VM) so I can analyze the data for anomalies. +++# Tutorial: Log network traffic to and from a virtual machine using the Azure portal ++Network security group flow logging is a feature of Azure Network Watcher that allows you to log information about IP traffic flowing through a network security group. For more information about network security group flow logging, see [NSG flow logs overview](network-watcher-nsg-flow-logging-overview.md). ++This tutorial helps you use NSG flow logs to log a virtual machine's network traffic that flows through the [network security group](../virtual-network/network-security-groups-overview.md) associated with its network interface. +++In this tutorial, you learn how to: ++> [!div class="checklist"] +> * Create a virtual network +> * Create a virtual machine with a network security group associated with its network interface +> * Register the Microsoft.Insights provider +> * Enable flow logging for a network security group using Network Watcher flow logs +> * Download logged data +> * View logged data ++## Prerequisites ++- An Azure account with an active subscription. If you don't have one, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ++## Create a virtual network ++In this section, you create the **myVNet** virtual network with one subnet for the virtual machine. ++1. Sign in to the [Azure portal](https://portal.azure.com). ++1. 
In the search box at the top of the portal, enter ***virtual networks***. Select **Virtual networks** from the search results. ++ :::image type="content" source="./media/nsg-flow-logs-tutorial/virtual-network-azure-portal.png" alt-text="Screenshot shows searching for virtual networks in the Azure portal."::: ++1. Select **+ Create**. In **Create virtual network**, enter or select the following values in the **Basics** tab: ++ | Setting | Value | + | | | + | **Project details** | | + | Subscription | Select your Azure subscription. | + | Resource Group | Select **Create new**. </br> Enter ***myResourceGroup*** in **Name**. </br> Select **OK**. | + | **Instance details** | | + | Name | Enter ***myVNet***. | + | Region | Select **(US) East US**. | ++1. Select **Review + create**. ++1. Review the settings, and then select **Create**. ++## Create a virtual machine ++In this section, you create **myVM** virtual machine. ++1. In the search box at the top of the portal, enter ***virtual machines***. Select **Virtual machines** from the search results. ++1. Select **+ Create** and then select **Azure virtual machine**. ++1. In **Create a virtual machine**, enter or select the following values in the **Basics** tab: ++ | Setting | Value | + | | | + | **Project Details** | | + | Subscription | Select your Azure subscription. | + | Resource Group | Select **myResourceGroup**. | + | **Instance details** | | + | Virtual machine name | Enter ***myVM***. | + | Region | Select **(US) East US**. | + | Availability Options | Select **No infrastructure redundancy required**. | + | Security type | Select **Standard**. | + | Image | Select **Windows Server 2022 Datacenter: Azure Edition - x64 Gen2**. | + | Size | Choose a size or leave the default setting. | + | **Administrator account** | | + | Username | Enter a username. | + | Password | Enter a password. | + | Confirm password | Reenter password. | ++1. 
Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**. ++1. In the Networking tab, select the following values: ++ | Setting | Value | + | | | + | **Network interface** | | + | Virtual network | Select **myVNet**. | + | Subnet | Select **mySubnet**. | + | Public IP | Select **(new) myVM-ip**. | + | NIC network security group | Select **Basic**. This setting creates a network security group named **myVM-nsg** and associates it with the network interface of **myVM** virtual machine. | + | Public inbound ports | Select **Allow selected ports**. | + | Select inbound ports | Select **RDP (3389)**. | ++ > [!CAUTION] + > Leaving the RDP port open to the internet is only recommended for testing. For production environments, it's recommended to restrict access to the RDP port to a specific IP address or range of IP addresses. You can also block internet access to the RDP port and use [Azure Bastion](../bastion/bastion-overview.md) to securely connect to your virtual machine from the Azure portal. ++1. Select **Review + create**. ++1. Review the settings, and then select **Create**. ++1. Once the deployment is complete, select **Go to resource** to go to the **Overview** page of **myVM**. ++1. Select **Connect** then select **RDP**. ++1. Select **Download RDP File** and open the downloaded file. ++1. Select **Connect** and then enter the username and password that you created in the previous steps. Accept the certificate if prompted. ++## Register Insights provider ++NSG flow logging requires the **Microsoft.Insights** provider. To check its status, follow these steps: ++1. In the search box at the top of the portal, enter ***subscriptions***. Select **Subscriptions** in the search results. ++1. Select the Azure subscription that you want to enable the provider for in **Subscriptions**. ++1. Select **Resource providers** under **Settings** of your subscription. ++1. Enter ***insight*** in the filter box. ++1. 
Confirm that the status of the provider is **Registered**. If the status is **NotRegistered**, select the **Microsoft.Insights** provider, and then select **Register**. ++ :::image type="content" source="./media/nsg-flow-logs-tutorial/register-microsoft-insights.png" alt-text="Screenshot of registering Microsoft Insights provider in the Azure portal."::: ++## Create a storage account ++In this section, you create a storage account to store the flow logs. ++1. In the search box at the top of the portal, enter ***storage accounts***. Select **Storage accounts** in the search results. ++1. Select **+ Create**. In **Create a storage account**, enter or select the following values in the **Basics** tab: ++ | Setting | Value | + | | | + | **Project details** | | + | Subscription | Select your Azure subscription. | + | Resource Group | Select **myResourceGroup**. | + | **Instance details** | | + | Storage account name | Enter a unique name. This tutorial uses **mynwstorageaccount**. | + | Region | Select **(US) East US**. The storage account must be in the same region as the virtual machine and its network security group. | + | Performance | Select **Standard**. NSG flow logs only support Standard-tier storage accounts. | + | Redundancy | Select **Locally-redundant storage (LRS)** or a different replication strategy that matches your durability requirements. | ++1. Select the **Review** tab or select the **Review** button at the bottom. ++1. Review the settings, and then select **Create**. ++## Create an NSG flow log ++In this section, you create an NSG flow log that's saved to the storage account you created earlier in the tutorial. ++1. In the search box at the top of the portal, enter ***network watcher***. Select **Network Watcher** in the search results. ++1. Under **Logs**, select **Flow logs**. ++1. In **Network Watcher | Flow logs**, select **+ Create** or the blue **Create flow log** button. 
++ :::image type="content" source="./media/nsg-flow-logs-tutorial/flow-logs.png" alt-text="Screenshot of Flow logs page in the Azure portal." lightbox="./media/nsg-flow-logs-tutorial/flow-logs.png"::: ++1. Enter or select the following values in **Create a flow log**: ++ | Setting | Value | + | - | -- | + | **Project details** | | + | Subscription | Select the Azure subscription of the network security group that you want to log. | + | Network security group | Select **+ Select resource**. <br> In **Select network security group**, select **myVM-nsg**. Then, select **Confirm selection**. | + | Flow Log Name | Leave the default of **myVM-nsg-myResourceGroup-flowlog**. | + | **Instance details** | | + | Subscription | Select the Azure subscription of your storage account. | + | Storage Accounts | Select the storage account you created in the previous steps. This tutorial uses **mynwstorageaccount**. | + | Retention (days) | Enter ***0*** to retain the flow log data in the storage account forever (until you delete it from the storage account). To apply a retention policy, enter the retention time in days. For information about storage pricing, see [Azure Storage pricing](https://azure.microsoft.com/pricing/details/storage/). | ++ :::image type="content" source="./media/nsg-flow-logs-tutorial/create-nsg-flow-log.png" alt-text="Screenshot of create NSG flow log page in the Azure portal."::: ++ > [!NOTE] + > The Azure portal creates NSG flow logs in the **NetworkWatcherRG** resource group. ++1. Select **Review + create**. ++1. Review the settings, and then select **Create**. ++1. Once the deployment is complete, select **Go to resource** to confirm that the flow log is created and listed on the **Flow logs** page. ++ :::image type="content" source="./media/nsg-flow-logs-tutorial/flow-logs-list.png" alt-text="Screenshot of Flow logs page in the Azure portal showing the newly created flow log." lightbox="./media/nsg-flow-logs-tutorial/flow-logs-list.png"::: ++1. 
Go back to your RDP session with the **myVM** virtual machine. ++1. Open Microsoft Edge and go to `www.bing.com`. ++## Download the flow log ++In this section, you go to the storage account you previously selected and download the NSG flow log created in the previous section. ++1. In the search box at the top of the portal, enter ***storage accounts***. Select **Storage accounts** in the search results. ++2. Select **mynwstorageaccount** or the storage account you previously created and selected to store the logs. ++3. Under **Data storage**, select **Containers**. ++4. Select the **insights-logs-networksecuritygroupflowevent** container. ++5. In the container, navigate the folder hierarchy until you get to the `PT1H.json` file. NSG log files are written to a folder hierarchy that follows this naming convention: ++ ``` + https://{storageAccountName}.blob.core.windows.net/insights-logs-networksecuritygroupflowevent/resourceId=/SUBSCRIPTIONS/{subscriptionID}/RESOURCEGROUPS/{resourceGroupName}/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/{networkSecurityGroupName}/y={year}/m={month}/d={day}/h={hour}/m=00/macAddress={macAddress}/PT1H.json + ``` ++6. Select the ellipsis **...** to the right of the `PT1H.json` file, then select **Download**. ++ :::image type="content" source="./media/nsg-flow-logs-tutorial/nsg-log-file.png" alt-text="Screenshot showing how to download an NSG flow log from the storage account container in the Azure portal."::: ++> [!NOTE] +> You can use Azure Storage Explorer to access and download flow logs from your storage account. For more information, see [Get started with Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md). ++## View the flow log ++Open the downloaded `PT1H.json` file using a text editor of your choice. The following example is a section taken from the downloaded `PT1H.json` file, which shows a flow processed by the rule **DefaultRule_AllowInternetOutBound**. 
++```json +{ + "time": "2023-02-26T23:45:44.1503927Z", + "systemId": "00000000-0000-0000-0000-000000000000", + "macAddress": "112233445566", + "category": "NetworkSecurityGroupFlowEvent", + "resourceId": "/SUBSCRIPTIONS/abcdef01-2345-6789-0abc-def012345678/RESOURCEGROUPS/MYRESOURCEGROUP/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/MYVM-NSG", + "operationName": "NetworkSecurityGroupFlowEvents", + "properties": { + "Version": 2, + "flows": [ + { + "rule": "DefaultRule_AllowInternetOutBound", + "flows": [ + { + "mac": "112233445566", + "flowTuples": [ + "1677455097,10.0.0.4,13.107.21.200,49982,443,T,O,A,C,7,1158,12,8143" + ] + } + ] + } + ] + } +} +``` ++The comma-separated information for **flowTuples** is as follows: ++| Example data | What data represents | Explanation | +| | -- | - | +| 1677455097 | Time stamp | The time stamp of when the flow occurred in UNIX EPOCH format. In the previous example, the date converts to February 26, 2023 11:44:57 PM UTC/GMT. | +| 10.0.0.4 | Source IP address | The source IP address that the flow originated from. 10.0.0.4 is the private IP address of the VM you previously created. +| 13.107.21.200 | Destination IP address | The destination IP address that the flow was destined to. 13.107.21.200 is the IP address of `www.bing.com`. Since the traffic is destined outside Azure, the security rule **DefaultRule_AllowInternetOutBound** processed the flow. | +| 49982 | Source port | The source port that the flow originated from. | +| 443 | Destination port | The destination port that the flow was destined to. | +| T | Protocol | The protocol of the flow. T: TCP. | +| O | Direction | The direction of the flow. O: Outbound. | +| A | Decision | The decision made by the security rule. A: Allowed. | +| C | Flow State **Version 2 only** | The state of the flow. C: Continuing for an ongoing flow. | +| 7 | Packets sent **Version 2 only** | The total number of TCP packets sent to destination since the last update. 
| +| 1158 | Bytes sent **Version 2 only** | The total number of TCP packet bytes sent from source to destination since the last update. Packet bytes include the packet header and payload. | +| 12 | Packets received **Version 2 only** | The total number of TCP packets received from destination since the last update. | +| 8143 | Bytes received **Version 2 only** | The total number of TCP packet bytes received from destination since the last update. Packet bytes include the packet header and payload. | ++## Clean up resources ++When no longer needed, delete the **myResourceGroup** resource group and all of the resources it contains: ++1. In the search box at the top of the portal, enter ***myResourceGroup***. Select **myResourceGroup** from the search results. ++1. Select **Delete resource group**. ++1. In **Delete a resource group**, enter ***myResourceGroup***, and then select **Delete**. ++1. Select **Delete** to confirm the deletion of the resource group and all its resources. ++> [!NOTE] +> The **myVM-nsg-myResourceGroup-flowlog** flow log is in the **NetworkWatcherRG** resource group, but it's deleted automatically when the **myVM-nsg** network security group is deleted (as part of deleting the **myResourceGroup** resource group). ++## Related content ++- To learn more about NSG flow logs, see [Flow logging for network security groups](network-watcher-nsg-flow-logging-overview.md). +- To learn how to create, change, enable, disable, or delete NSG flow logs, see [Manage NSG flow logs](nsg-flow-logging.md). +- To learn about Traffic analytics, see [Traffic analytics overview](traffic-analytics.md). + |
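The comma-separated flowTuples entry described in the table above can be unpacked programmatically. A minimal Python sketch, assuming the Version 2 field order from that table (the field names are illustrative labels, not part of the log schema):

```python
from datetime import datetime, timezone

# Field order of a Version 2 NSG flow tuple, following the table above.
FIELDS = ["timestamp", "src_ip", "dst_ip", "src_port", "dst_port",
          "protocol", "direction", "decision", "state",
          "packets_sent", "bytes_sent", "packets_received", "bytes_received"]

def parse_flow_tuple(raw):
    """Split a comma-separated flowTuples entry into a labeled dict."""
    record = dict(zip(FIELDS, raw.split(",")))
    # Convert the UNIX epoch time stamp into a readable UTC datetime.
    record["timestamp"] = datetime.fromtimestamp(int(record["timestamp"]),
                                                 tz=timezone.utc)
    return record

flow = parse_flow_tuple("1677455097,10.0.0.4,13.107.21.200,49982,443,T,O,A,C,7,1158,12,8143")
print(flow["timestamp"])                 # 2023-02-26 23:44:57+00:00
print(flow["dst_ip"], flow["decision"])  # 13.107.21.200 A
```

The parsed timestamp matches the February 26, 2023 11:44:57 PM UTC value shown in the table.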
operator-nexus | Howto Kubernetes Cluster Agent Pools | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-kubernetes-cluster-agent-pools.md | Before proceeding with this how-to guide, it's recommended that you: ## System pool For a system node pool, Nexus Kubernetes automatically assigns the label `kubernetes.azure.com/mode: system` to its nodes. This label causes Nexus Kubernetes to prefer scheduling system pods on node pools that contain this label. This label doesn't prevent you from scheduling application pods on system node pools. However, we recommend you isolate critical system pods from your application pods to prevent misconfigured or rogue application pods from accidentally killing system pods. -You can enforce this behavior by creating a dedicated system node pool. Use the `CriticalAddonsOnly=true:NoSchedule` taint to prevent application pods from being scheduled on system node pools. +You can enforce this behavior by creating a dedicated system node pool. Use the `CriticalAddonsOnly=true:NoSchedule` taint to prevent application pods from being scheduled on system node pools. If you intend to use the system pool for application pods (not dedicated), don't apply any application-specific taints to the pool, as they can cause cluster creation to fail. > [!IMPORTANT] > If you run a single system node pool for your Nexus Kubernetes cluster in a production environment, we recommend you use at least three nodes for the node pool. |
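To illustrate the taint above: a pod that must still run on a tainted dedicated system pool declares a matching toleration in its spec. A minimal Kubernetes manifest sketch (the pod name and image are placeholders, not from the article):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: critical-addon                    # placeholder name
spec:
  nodeSelector:
    kubernetes.azure.com/mode: system     # target nodes labeled as system pool
  tolerations:
  - key: "CriticalAddonsOnly"             # matches the CriticalAddonsOnly=true:NoSchedule taint
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
  containers:
  - name: addon
    image: mcr.microsoft.com/oss/kubernetes/pause:3.6   # placeholder image
```

Application pods without this toleration are kept off the tainted pool, which provides the isolation recommended above.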
peering-service | Location Partners | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/peering-service/location-partners.md | The following table provides information on the Peering Service connectivity par | [NAP Africa](https://www.napafrica.net/technical/microsoft-azure-peering-service/) | Africa | | [NTT Communications](https://www.ntt.com/en/services/network/software-defined-network.html) | Japan, Indonesia | | [PCCW](https://www.pccwglobal.com/en/enterprise/products/network/ep-global-internet-access) | Asia |+| [PIT Chile](https://www.pitchile.cl/wp/maps/) | LATAM | | [Sejong Telecom](https://www.sejongtelecom.net/) | Asia | | [Singtel](https://www.singtel.com/business/campaign/singnet-cloud-connect-microsoft-direct) | Asia | | [Swisscom](https://www.swisscom.ch/en/business/enterprise/offer/wireline/ip-plus.html) | Europe | The following table provides information on the Peering Service connectivity par | Mumbai | [DE-CIX](https://www.de-cix.net/services/microsoft-azure-peering-service/) | | New York | [DE-CIX](https://www.de-cix.net/services/microsoft-azure-peering-service/) | | San Jose | [Equinix IX](https://www.equinix.com/interconnection-services/internet-exchange/) |+| Santiago | [PIT Chile](https://www.pitchile.cl/wp/maps/) | | Seattle | [Equinix IX](https://www.equinix.com/interconnection-services/internet-exchange/) | | Singapore | [Equinix IX](https://www.equinix.com/interconnection-services/internet-exchange/) | |
postgresql | Moved | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/moved.md | Title: Where is Azure Database for PostgreSQL - Hyperscale (Citus) description: Hyperscale (Citus) is now Azure Cosmos DB for PostgreSQL--++ recommendations: false Previously updated : 10/13/2022 Last updated : 09/24/2023 # Azure Database for PostgreSQL - Hyperscale (Citus) is now Azure Cosmos DB for PostgreSQL -Existing Hyperscale (Citus) server groups will automatically become [Azure +Existing Hyperscale (Citus) server groups automatically became [Azure Cosmos DB for PostgreSQL](../../cosmos-db/postgresql/introduction.md) clusters-under the new name, with zero downtime. All features and pricing, including -reserved compute pricing and regional availability, will be preserved under the +under the new name in October 2022. All features and pricing, including +reserved compute pricing and regional availability, were preserved under the new name. -Once the name change is complete, all Hyperscale (Citus) information such as -product overview, pricing information, documentation, and more will be moved -under the Azure Cosmos DB sections in the Azure portal. --> [!NOTE] -> -> The name change in the Azure portal for existing Hyperscale (Citus) customers -> will happen at the end of October. During this process, the cluster may -> temporarily disappear in the Azure portal in both Hyperscale (Citus) and -> Azure Cosmos DB. There will be no service downtime for users of the database, -> only a possible interruption in the portal administrative interface. - ## Find your cluster in the renamed service View the list of Azure Cosmos DB for PostgreSQL clusters in your subscription. # [Direct link](#tab/direct) -Go to the [list of Azure Cosmos DB for PostgreSQL clusters](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/Microsoft.DocumentDb%2FdatabaseAccounts) in the Azure portal. 
+Go to the [list of Azure Cosmos DB for PostgreSQL clusters](https://portal.azure.com/#browse/Microsoft.DBforPostgreSQL%2FserverGroupsv2) in the Azure portal. # [Portal search](#tab/portal-search) -In the [Azure portal](https://portal.azure.com), search for `cosmosdb` and -select **Azure Cosmos DB** from the results. -+In the [Azure portal](https://portal.azure.com), search for `postgresql` and +select **Azure Cosmos DB for PostgreSQL Cluster** from the results. -Your cluster will appear in this list. Once it's listed in Azure Cosmos DB, it -will no longer appear as an Azure Database for PostgreSQL server group. +Your cluster will appear in this list. ## Next steps |
postgresql | Concepts Single To Flexible | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/concepts-single-to-flexible.md | Along with data migration, the tool automatically provides the following built-i - Migration of permissions of database objects on your source server such as GRANTS/REVOKES to the target server. > [!NOTE] -> This functionality is enabled only for flexible servers in **North Europe** region. It will be enabled for flexible servers in other Azure regions soon. In the meantime, you can follow the steps mentioned in this [doc](../single-server/how-to-upgrade-using-dump-and-restore.md#migrate-the-roles) to perform user/roles migration +> This functionality is enabled only for flexible servers in the **Central US**, **Canada Central**, **France Central**, **Japan East**, and **Australia East** regions. It will be enabled for flexible servers in other Azure regions soon. In the meantime, you can follow the steps mentioned in this [doc](../single-server/how-to-upgrade-using-dump-and-restore.md#migrate-the-roles) to perform user/roles migration. ## Limitations |
private-link | Private Endpoint Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-overview.md | As you're creating private endpoints, consider the following: ## Private-link resource A private-link resource is the destination target of a specified private endpoint. The following table lists the available resources that support a private endpoint: -| Private-link resource name | Resource type | Subresources | +| Private-link resource name | Resource type | Sub-resources | | | - | - |+| Application Gateway | Microsoft.Network/applicationgateways | application gateway | +| Azure AI services | Microsoft.CognitiveServices/accounts | account | +| Azure API for FHIR (Fast Healthcare Interoperability Resources) | Microsoft.HealthcareApis/services | fhir | | Azure App Configuration | Microsoft.Appconfiguration/configurationStores | configurationStores |+| Azure App Service | Microsoft.Web/hostingEnvironments | hosting environment | +| Azure App Service | Microsoft.Web/sites | sites | | Azure Automation | Microsoft.Automation/automationAccounts | Webhook, DSCAndHybridWorker |-| Azure Cosmos DB | Microsoft.AzureCosmosDB/databaseAccounts | SQL, MongoDB, Cassandra, Gremlin, Table | -| Azure Cosmos DB for PostgreSQL | Microsoft.DBforPostgreSQL/serverGroupsv2 | coordinator | +| Azure Backup | Microsoft.RecoveryServices/vaults | AzureBackup, AzureSiteRecovery | | Azure Batch | Microsoft.Batch/batchAccounts | batchAccount, nodeManagement | | Azure Cache for Redis | Microsoft.Cache/Redis | redisCache | | Azure Cache for Redis Enterprise | Microsoft.Cache/redisEnterprise | redisEnterprise |-| Azure AI services | Microsoft.CognitiveServices/accounts | account | -| Azure Managed Disks | Microsoft.Compute/diskAccesses | managed disk | +| Azure Cognitive Search | Microsoft.Search/searchServices | searchService | | Azure Container Registry | Microsoft.ContainerRegistry/registries | registry |-| Azure Kubernetes Service - Kubernetes 
API | Microsoft.ContainerService/managedClusters | management | -| Azure Data Factory | Microsoft.DataFactory/factories | dataFactory | +| Azure Cosmos DB | Microsoft.AzureCosmosDB/databaseAccounts | SQL, MongoDB, Cassandra, Gremlin, Table | +| Azure Cosmos DB for PostgreSQL | Microsoft.DBforPostgreSQL/serverGroupsv2 | coordinator | | Azure Data Explorer | Microsoft.Kusto/clusters | cluster |+| Azure Data Factory | Microsoft.DataFactory/factories | dataFactory | | Azure Database for MariaDB | Microsoft.DBforMariaDB/servers | mariadbServer | | Azure Database for MySQL - Single Server | Microsoft.DBforMySQL/servers | mysqlServer | | Azure Database for MySQL- Flexible Server | Microsoft.DBforMySQL/flexibleServers | mysqlServer | | Azure Database for PostgreSQL - Single server | Microsoft.DBforPostgreSQL/servers | postgresqlServer |+| Azure Databricks | Microsoft.Databricks/workspaces | databricks_ui_api, browser_authentication | | Azure Device Provisioning Service | Microsoft.Devices/provisioningServices | iotDps |-| Azure IoT Hub | Microsoft.Devices/IotHubs | iotHub | -| Azure IoT Central | Microsoft.IoTCentral/IoTApps | IoTApps | | Azure Digital Twins | Microsoft.DigitalTwins/digitalTwinsInstances | API | | Azure Event Grid | Microsoft.EventGrid/domains | domain | | Azure Event Grid | Microsoft.EventGrid/topics | topic | | Azure Event Hub | Microsoft.EventHub/namespaces | namespace |+| Azure File Sync | Microsoft.StorageSync/storageSyncServices | File Sync Service | | Azure HDInsight | Microsoft.HDInsight/clusters | cluster |-| Azure API for FHIR (Fast Healthcare Interoperability Resources) | Microsoft.HealthcareApis/services | fhir | -| Azure Key Vault HSM (hardware security module) | Microsoft.Keyvault/managedHSMs | HSM | +| Azure IoT Central | Microsoft.IoTCentral/IoTApps | IoTApps | +| Azure IoT Hub | Microsoft.Devices/IotHubs | iotHub | | Azure Key Vault | Microsoft.KeyVault/vaults | vault |-| Azure Machine Learning | 
Microsoft.MachineLearningServices/workspaces | amlworkspace | +| Azure Key Vault HSM (hardware security module) | Microsoft.Keyvault/managedHSMs | HSM | +| Azure Kubernetes Service - Kubernetes API | Microsoft.ContainerService/managedClusters | management | | Azure Machine Learning | Microsoft.MachineLearningServices/registries | amlregistry |+| Azure Machine Learning | Microsoft.MachineLearningServices/workspaces | amlworkspace | +| Azure Managed Disks | Microsoft.Compute/diskAccesses | managed disk | +| Azure Media Services | Microsoft.Media/mediaservices | keydelivery, liveevent, streamingendpoint | | Azure Migrate | Microsoft.Migrate/assessmentProjects | project |-| Application Gateway | Microsoft.Network/applicationgateways | application gateway | -| Private Link service (your own service) | Microsoft.Network/privateLinkServices | empty | -| Power BI | Microsoft.PowerBI/privateLinkServicesForPowerBI | Power BI | -| Microsoft Purview | Microsoft.Purview/accounts | account | -| Microsoft Purview | Microsoft.Purview/accounts | portal | -| Azure Backup | Microsoft.RecoveryServices/vaults | AzureBackup, AzureSiteRecovery | +| Azure Monitor Private Link Scope | Microsoft.Insights/privatelinkscopes | azuremonitor | | Azure Relay | Microsoft.Relay/namespaces | namespace |-| Azure Cognitive Search | Microsoft.Search/searchServices | searchService | | Azure Service Bus | Microsoft.ServiceBus/namespaces | namespace | | Azure SignalR Service | Microsoft.SignalRService/SignalR | signalr | | Azure SignalR Service | Microsoft.SignalRService/webPubSub | webpubsub | | Azure SQL Database | Microsoft.Sql/servers | SQL Server (sqlServer) | | Azure SQL Managed Instance | Microsoft.Sql/managedInstances | managedInstance |+| Azure Static Web Apps | Microsoft.Web/staticSites | staticSites | | Azure Storage | Microsoft.Storage/storageAccounts | Blob (blob, blob_secondary)<BR> Table (table, table_secondary)<BR> Queue (queue, queue_secondary)<BR> File (file, file_secondary)<BR> Web 
(web, web_secondary)<BR> Dfs (dfs, dfs_secondary) |-| Azure File Sync | Microsoft.StorageSync/storageSyncServices | File Sync Service | | Azure Synapse | Microsoft.Synapse/privateLinkHubs | web |-| Azure Synapse Analytics | Microsoft.Synapse/workspaces | Sql, SqlOnDemand, Dev | -| Azure App Service | Microsoft.Web/hostingEnvironments | hosting environment | -| Azure App Service | Microsoft.Web/sites | sites | -| Azure Static Web Apps | Microsoft.Web/staticSites | staticSites | -| Azure Media Services | Microsoft.Media/mediaservices | keydelivery, liveevent, streamingendpoint | +| Azure Synapse Analytics | Microsoft.Synapse/workspaces | Sql, SqlOnDemand, Dev | +| Azure Virtual Desktop - host pools | Microsoft.DesktopVirtualization/hostpools | connection | +| Azure Virtual Desktop - workspaces | Microsoft.DesktopVirtualization/workspaces | feed<br />global | +| Microsoft Purview | Microsoft.Purview/accounts | account | +| Microsoft Purview | Microsoft.Purview/accounts | portal | +| Power BI | Microsoft.PowerBI/privateLinkServicesForPowerBI | Power BI | +| Private Link service (your own service) | Microsoft.Network/privateLinkServices | empty | | Resource Management Private Links | Microsoft.Authorization/resourceManagementPrivateLinks | ResourceManagement |-| Azure Databricks | Microsoft.Databricks/workspaces | databricks_ui_api, browser_authentication | -| Azure Monitor Private Link Scope | Microsoft.Insights/privatelinkscopes | azuremonitor | > [!NOTE] > You can create private endpoints only on a General Purpose v2 (GPv2) storage account. |
reliability | Migrate Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-vm.md | description: Learn how to migrate your Azure Virtual Machines and Virtual Machin Previously updated : 04/21/2022 Last updated : 09/21/2023 Now that you have migrated your data to ZRS managed disks or zonal managed disks ``` +## Migration Option 2: VM regional to zonal move -## Migration Option 2: Azure Resource Mover +This section details how to move single-instance Azure virtual machines from a regional configuration to a target [Availability Zone](../reliability/availability-zones-overview.md) within the same Azure region. +++> [!IMPORTANT] +> The regional to zonal move of single-instance VMs is currently in *Public Preview*. ++### Key benefits of regional to zonal move ++The benefits of a regional to zonal move are: ++- **Enhanced user experience** - Availability zones in the desired region lower latency and improve the customer experience. +- **Reduced downtime** - The virtual machines remain available throughout the move, improving application resiliency and availability. +- **Network connectivity** - Uses the existing infrastructure, such as virtual networks (VNets), subnets, network security groups (NSGs), and load balancers (LBs), which can support the target zonal configuration. +- **High scalability** - Orchestrates the move at scale by reducing manual touch points, minimizing the overall migration time from days to hours or even minutes, depending on the volume of data. +++### Components ++The following components are used during a regional to zonal move: ++| Component | Details | +| | | +| Move collection | A move collection is an Azure Resource Manager object that is created during the regional to zonal move process. The collection is based on the VM's region and subscription parameters and contains metadata and configuration information about the resources you want to move. 
VMs added to a move collection must be in the same subscription and region/location but can be selected from different resource groups.| +| Move resource | When you add VMs to a move collection, each one is tracked as a move resource, and this information is maintained in the move collection for every VM currently in the move process. The move collection is created in a temporary resource group in your subscription and can be deleted along with the resource group if desired. | +| Dependencies | When you add VMs to the move collection, validation checks are done to determine whether the VMs have any dependencies that aren't in the move collection. For example, a network interface card (NIC) is a dependent resource for a VM and must be moved along with the VM. After identifying the dependencies for each VM, you can either add the dependencies to the move collection and move them as well, or you can select alternate existing resources in the target zonal configuration. You can select an existing VNet in the target zonal configuration or create a new VNet as applicable. | ++### Support matrix + +#### **Virtual Machines compute** ++The following table describes the support matrix for moving virtual machines from a regional to zonal configuration: ++| Scenario | Support | Details | +| | | | +| Single Instance VM | Supported | Regional to zonal move of single-instance VMs is supported. | +| VMs within an Availability Set | Not supported | | +| VMs inside Virtual Machine Scale Sets with uniform orchestration | Not supported | | +| VMs inside Virtual Machine Scale Sets with flexible orchestration | Not supported | | +| Supported regions | Supported | Only regions that support availability zones are supported. Learn [more](../reliability/availability-zones-service-support.md) about the region details. | +| VMs already located in an availability zone | Not supported | Cross-zone move isn't supported. 
Only VMs that are within the same region can be moved to another availability zone. | +| VM extensions | Not supported | VM move is supported, but extensions aren't copied to the target zonal VM. | +| VMs with trusted launch | Supported | Re-enable the **Integrity Monitoring** option in the portal and save the configuration after the move. | +| Confidential VMs | Supported | Re-enable the **Integrity Monitoring** option in the portal and save the configuration after the move. | +| Generation 2 VMs (UEFI boot) | Supported | | +| VMs in proximity placement groups | Supported | The source proximity placement group (PPG) isn't retained in the zonal configuration. | +| Spot VMs (low-priority VMs) | Supported | | +| VMs with dedicated hosts | Supported | The source VM's dedicated host isn't preserved. | +| VMs with host caching enabled | Supported | | +| VMs created from marketplace images | Supported | | +| VMs created from custom images | Supported | | +| VMs with HUB (Hybrid Use Benefit) license | Supported | | +| VM RBAC policies | Not supported | VM move is supported, but RBAC role assignments aren't copied to the target zonal VM. | +| VMs using Accelerated Networking | Supported | | ++#### **Virtual Machines storage settings** ++The following table describes the support matrix for moving virtual machine storage settings: ++| Scenario | Support | Details | +| | | | +| VMs with managed disks | Supported | Regional to zonal move of single-instance VM(s) is supported. | +| VMs using unmanaged disks | Not supported | | +| VMs using Ultra Disks | Not supported | | +| VMs using ephemeral OS disks | Not supported | | +| VMs using shared disks | Not supported | | +| VMs with standard HDDs | Supported | | +| VMs with standard SSDs | Supported | | +| VMs with premium SSDs | Supported | | +| VMs with NVMe disks (storage optimized - Lsv2-series) | Supported | | +| Temporary disk in VMs | Supported | Temporary disks are created; however, they don't contain the data from the source temporary disks.
| +| VMs with ZRS disks | Supported | | +| VMs with ADE (Azure Disk Encryption) | Supported | | +| VMs with server-side encryption using service-managed keys | Supported | | +| VMs with server-side encryption using customer-managed keys | Supported | | +| VMs with host-based encryption enabled with PMK (platform-managed keys) | Not supported | | +| VMs with host-based encryption enabled with CMK (customer-managed keys) | Not supported | | +| VMs with host-based encryption enabled with double encryption | Not supported | | ++#### **Virtual Machines networking settings** ++The following table describes the support matrix for moving virtual machine networking settings: ++| Scenario | Support | Details | +| | | | +| NIC | Supported | By default, a new resource is created; however, you can specify an existing resource in the target configuration. | +| VNet | Supported | By default, the source virtual network (VNet) is used, or you can specify an existing resource in the target configuration. | +++## Migration Option 3: Azure Resource Mover ### When to use Azure Resource Mover The following requirements should be part of a disaster recovery strategy that h - [Azure services and regions that support availability zones](availability-zones-service-support.md) - [Reliability in Virtual Machines](./reliability-virtual-machines.md) - [Reliability in Virtual Machine Scale Sets](./reliability-virtual-machine-scale-sets.md)+- [Move single instance Azure VMs from regional to zonal configuration using PowerShell](../virtual-machines/move-virtual-machines-regional-zonal-powershell.md) +- [Move single instance Azure VMs from regional to zonal configuration via portal](../virtual-machines/move-virtual-machines-regional-zonal-portal.md) |
reliability | Reliability App Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-app-service.md | Azure App Service can be deployed across [availability zones (AZ)](../reliabilit When you configure zone redundancy, the platform automatically spreads the instances of the Azure App Service plan across three zones in the selected region. This means that the minimum App Service plan instance count is always three. If you specify a capacity larger than three, and the number of instances is divisible by three, the instances are spread evenly. Otherwise, instance counts beyond 3*N are spread across the remaining one or two zones. -Availability zone support is a property of the App Service plan. App Service plans can be created on managed multi-tenant environment or dedicated environment using App Service Environment. To Learn more regarding App Service Environment, see [App Service Environment v3 overview](../app-service/environment/overview.md). +Availability zone support is a property of the App Service plan. App Service plans can be created in a managed multi-tenant environment or in a dedicated environment by using App Service Environment v3. To learn more, see [App Service Environment v3 overview](../app-service/environment/overview.md). For App Services that aren't configured to be zone redundant, VM instances aren't zone resilient and can experience downtime during an outage in any zone in that region. The current requirements/limitations for enabling availability zones are: - Both Windows and Linux are supported. +- Availability zones are only supported on the newer App Service footprint. Even if you're using one of the supported regions, you'll receive an error if availability zones aren't supported for your resource group. To ensure your workloads land on a stamp that supports availability zones, you may need to create a new resource group, App Service plan, and App Service.
+ - Your App Service plan must be one of the following plans that support availability zones: - In a multi-tenant environment using App Service Premium v2 or Premium v3 plans. The Azure Resource Manager template snippet below shows the new ***zoneRedundant To learn how to create an App Service Environment v3 on the Isolated v2 plan, see [Create an App Service Environment](../app-service/environment/creation.md). +#### Troubleshooting ++|Error message |Description |Recommendation | +|||-| +|Zone redundancy is not available for resource group 'RG-NAME'. Please deploy app service plan 'ASP-NAME' to a new resource group. |Availability zones are only supported on the newer App Service footprint. Even if you're using one of the supported regions, you'll receive an error if availability zones aren't supported for your resource group. |To ensure your workloads land on a stamp that supports availability zones, create a new resource group, App Service plan, and App Service. | + ### Fault tolerance To prepare for availability zone failure, you should over-provision the capacity of your service so that the solution can tolerate the loss of one third of its capacity and continue to function without degraded performance during zone-wide outages. Since the platform spreads VMs across three zones and you need to account for at least the failure of one zone, multiply the peak workload instance count by a factor of zones/(zones-1), or 3/2. For example, if your typical peak workload requires four instances, you should provision six instances: (2/3 * 6 instances) = 4 instances. |
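The zones/(zones-1) over-provisioning factor described above can be sketched in a few lines. This is only an illustration of the arithmetic in the text; the function name is hypothetical.

```python
import math

# Over-provision so that losing one of the three zones still leaves enough
# capacity to serve the peak workload: multiply by zones / (zones - 1).
def required_instances(peak_instances: int, zones: int = 3) -> int:
    return math.ceil(peak_instances * zones / (zones - 1))

print(required_instances(4))  # 6, matching the example in the text
```

With six provisioned instances, a one-zone outage leaves 2/3 * 6 = 4 instances, exactly the peak requirement.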
sap | Deploy Control Plane | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/deploy-control-plane.md | Optionally, assign the following permissions to the service principal: az role assignment create --assignee <appId> --role "User Access Administrator" --scope /subscriptions/<subscriptionID>/resourceGroups/<resourceGroupName> ``` -## Prepare the web app -This step is optional. If you want a browser-based UX to help the configuration of SAP workload zones and systems, run the following commands before you deploy the control plane. -# [Linux](#tab/linux) +## Deploy the control plane -```bash -echo '[{"resourceAppId":"00000003-0000-0000-c000-000000000000","resourceAccess":[{"id":"e1fe6dd8-ba31-4d61-89e7-88639da4683d","type":"Scope"}]}]' >> manifest.json +All the artifacts that are required to deploy the control plane are located in GitHub repositories. -region_code=WEEU +Prepare for the control plane deployment by cloning the repositories using the following commands: -export TF_VAR_app_registration_app_id=$(az ad app create \ - --display-name ${region_code}-webapp-registration \ - --enable-id-token-issuance true \ - --sign-in-audience AzureADMyOrg \ - --required-resource-access @manifest.json \ - --query "appId" | tr -d '"') -export TF_VAR_webapp_client_secret=$(az ad app credential reset \ - --id $TF_VAR_app_registration_app_id --append \ - --query "password" | tr -d '"') +```bash +mkdir -p ~/Azure_SAP_Automated_Deployment; cd $_ -export TF_VAR_use_webapp=true -rm manifest.json +git clone https://github.com/Azure/sap-automation.git sap-automation ++git clone https://github.com/Azure/sap-automation-samples.git samples ```-# [Windows](#tab/windows) -```powershell +The sample deployer configuration file `MGMT-WEEU-DEP00-INFRASTRUCTURE.tfvars` is located in the `~/Azure_SAP_Automated_Deployment/samples/Terraform/WORKSPACES/DEPLOYER/MGMT-WEEU-DEP00-INFRASTRUCTURE` folder. 
-Add-Content -Path manifest.json -Value '[{"resourceAppId":"00000003-0000-0000-c000-000000000000","resourceAccess":[{"id":"e1fe6dd8-ba31-4d61-89e7-88639da4683d","type":"Scope"}]}]' +The sample SAP library configuration file `MGMT-WEEU-SAP_LIBRARY.tfvars` is located in the `~/Azure_SAP_Automated_Deployment/samples/Terraform/WORKSPACES/LIBRARY/MGMT-WEEU-SAP_LIBRARY` folder. -$region_code="WEEU" +You can copy the sample configuration files to start testing the deployment automation framework. -$env:TF_VAR_app_registration_app_id = (az ad app create ` - --display-name $region_code-webapp-registration ` - --required-resource-accesses ./manifest.json ` - --query "appId").Replace('"',"") +A minimal Terraform file for the `DEPLOYER` might look like this example: -$env:TF_VAR_webapp_client_secret=(az ad app credential reset ` - --id $env:TF_VAR_app_registration_app_id --append ` - --query "password").Replace('"',"") +```terraform +# The environment value is a mandatory field, it is used for partitioning the environments. 
+environment = "MGMT" +# The location/region value is a mandatory field, it is used to control where the resources are deployed +location = "westeurope" -$env:TF_VAR_use_webapp="true" +# management_network_address_space is the address space for management virtual network +management_network_address_space = "10.10.20.0/25" +# management_subnet_address_prefix is the address prefix for the management subnet +management_subnet_address_prefix = "10.10.20.64/28" -del manifest.json +# management_firewall_subnet_address_prefix is the address prefix for the firewall subnet +management_firewall_subnet_address_prefix = "10.10.20.0/26" +firewall_deployment = false ++# management_bastion_subnet_address_prefix is the address prefix for the bastion subnet +management_bastion_subnet_address_prefix = "10.10.20.128/26" +bastion_deployment = true ++# deployer_enable_public_ip controls if the deployer Virtual machines will have Public IPs +deployer_enable_public_ip = false ++# deployer_count defines how many deployer VMs will be deployed +deployer_count = 1 ++# use_service_endpoint defines that the management subnets have service endpoints enabled +use_service_endpoint = true ++# use_private_endpoint defines that the storage accounts and key vaults have private endpoints enabled +use_private_endpoint = false ++# enable_firewall_for_keyvaults_and_storage defines that the storage accounts and key vaults have firewall enabled +enable_firewall_for_keyvaults_and_storage = false ++# public_network_access_enabled controls if storage account and key vaults have public network access enabled +public_network_access_enabled = true ``` -# [Azure DevOps](#tab/devops) +Note the Terraform variable file locations for future edits during deployment. -Currently, it isn't possible to perform this action from Azure DevOps. 
+A minimal Terraform file for the `LIBRARY` might look like this example: -+```terraform +# The environment value is a mandatory field, it is used for partitioning the environments, for example, PROD and NP. +environment = "MGMT" +# The location/region value is a mandatory field, it is used to control where the resources are deployed +location = "westeurope" -## Deploy the control plane +#Defines the DNS suffix for the resources +dns_label = "azure.contoso.net" -The sample deployer configuration file `MGMT-WEEU-DEP00-INFRASTRUCTURE.tfvars` is located in the `~/Azure_SAP_Automated_Deployment/samples/Terraform/WORKSPACES/DEPLOYER/MGMT-WEEU-DEP00-INFRASTRUCTURE` folder. +# use_private_endpoint defines that the storage accounts and key vaults have private endpoints enabled +use_private_endpoint = false +``` -The sample SAP library configuration file `MGMT-WEEU-SAP_LIBRARY.tfvars` is located in the `~/Azure_SAP_Automated_Deployment/samples/Terraform/WORKSPACES/LIBRARY/MGMT-WEEU-SAP_LIBRARY` folder. -Run the following command to create the deployer and the SAP library. The command adds the service principal details to the deployment key vault. If you followed the web app setup in the previous step, this command also creates the infrastructure to host the application. +Note the Terraform variable file locations for future edits during deployment. ++Run the following command to create the deployer and the SAP library. The command adds the service principal details to the deployment key vault. # [Linux](#tab/linux) -You can copy the sample configuration files to start testing the deployment automation framework. 
Run the following command to deploy the control plane: export ARM_CLIENT_SECRET="<password>" export ARM_TENANT_ID="<tenantId>" export env_code="MGMT" export region_code="WEEU"-export vnet_code="DEP01" +export vnet_code="DEP00" export DEPLOYMENT_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation" export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES" export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-auto az logout az login --service-principal -u "${ARM_CLIENT_ID}" -p="${ARM_CLIENT_SECRET}" --tenant "${ARM_TENANT_ID}" - cd ~/Azure_SAP_Automated_Deployment/WORKSPACES -sudo ${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/deploy_controlplane.sh \ - --deployer_parameter_file "${CONFIG_REPO_PATH}/DEPLOYER/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE.tfvars" \ - --library_parameter_file "${CONFIG_REPO_PATH}/LIBRARY/${env_code}-${region_code}-SAP_LIBRARY/${env_code}-${region_code}-SAP_LIBRARY.tfvars" \ - --subscription "${ARM_SUBSCRIPTION_ID}" \ - --spn_id "${ARM_CLIENT_ID}" \ - --spn_secret "${ARM_CLIENT_SECRET}" \ +deployer_parameter_file="${CONFIG_REPO_PATH}/DEPLOYER/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE.tfvars" +library_parameter_file="${CONFIG_REPO_PATH}/LIBRARY/${env_code}-${region_code}-SAP_LIBRARY/${env_code}-${region_code}-SAP_LIBRARY.tfvars" ++${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/deploy_controlplane.sh \ + --deployer_parameter_file "${deployer_parameter_file}" \ + --library_parameter_file "${library_parameter_file}" \ + --subscription "${ARM_SUBSCRIPTION_ID}" \ + --spn_id "${ARM_CLIENT_ID}" \ + --spn_secret "${ARM_CLIENT_SECRET}" \ --tenant_id "${ARM_TENANT_ID}" ``` You can track the progress in the Azure DevOps portal.
After the deployment is finished -### Manually configure the deployer by using Azure Bastion +### Manually configure a virtual machine as an SDAF deployer by using Azure Bastion To connect to the deployer: cd sap-automation/deploy/scripts The script installs Terraform and Ansible and configures the deployer. -### Manually configure the deployer +### Manually configure a virtual machine as an SDAF deployer Connect to the deployer VM from a computer that can reach the Azure virtual network. cd sap-automation/deploy/scripts The script installs Terraform and Ansible and configures the deployer. +## Securing the control plane ++The control plane is the most critical part of the SAP automation framework, so it's important to secure it. The following steps help you secure the control plane. +If you created your control plane by using an external virtual machine or the cloud shell, you should secure the control plane by implementing private endpoints for the storage accounts and key vaults. ++Sign in to the deployer virtual machine and copy the control plane configuration `tfvars` Terraform files to the deployer. Ensure that the files are located in the `~/Azure_SAP_Automated_Deployment/WORKSPACES` DEPLOYER and LIBRARY folders. ++Ensure that the `use_private_endpoint` variable is set to `true` in the `DEPLOYER` and `LIBRARY` configuration files. Also ensure that `public_network_access_enabled` is set to `false` in the `DEPLOYER` configuration files. ++```terraform ++# use_private_endpoint defines that the storage accounts and key vaults have private endpoints enabled +use_private_endpoint = true ++# public_network_access_enabled controls if storage account and key vaults have public network access enabled +public_network_access_enabled = false ++``` ++Rerun the control plane deployment to enable private endpoints for the storage accounts and key vaults.
++```bash ++export ARM_SUBSCRIPTION_ID="<subscriptionId>" +export ARM_CLIENT_ID="<appId>" +export ARM_CLIENT_SECRET="<password>" +export ARM_TENANT_ID="<tenantId>" +export env_code="MGMT" +export region_code="WEEU" +export vnet_code="DEP00" +export storageaccountname="<storageaccountname>" ++export DEPLOYMENT_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation" +export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES" +export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation" ++az logout +az login --service-principal -u "${ARM_CLIENT_ID}" -p="${ARM_CLIENT_SECRET}" --tenant "${ARM_TENANT_ID}" ++cd ~/Azure_SAP_Automated_Deployment/WORKSPACES ++deployer_parameter_file="${CONFIG_REPO_PATH}/DEPLOYER/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE.tfvars" +library_parameter_file="${CONFIG_REPO_PATH}/LIBRARY/${env_code}-${region_code}-SAP_LIBRARY/${env_code}-${region_code}-SAP_LIBRARY.tfvars" ++${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/deploy_controlplane.sh \ + --deployer_parameter_file "${deployer_parameter_file}" \ + --library_parameter_file "${library_parameter_file}" \ + --subscription "${ARM_SUBSCRIPTION_ID}" \ + --spn_id "${ARM_CLIENT_ID}" \ + --spn_secret "${ARM_CLIENT_SECRET}" \ + --tenant_id "${ARM_TENANT_ID}" \ + --storageaccountname "${storageaccountname}" \ + --recover +``` +++## Prepare the web app +This step is optional. If you want a browser-based UX to help you configure SAP workload zones and systems, run the following commands before you deploy the control plane.
++# [Linux](#tab/linux) ++```bash +echo '[{"resourceAppId":"00000003-0000-0000-c000-000000000000","resourceAccess":[{"id":"e1fe6dd8-ba31-4d61-89e7-88639da4683d","type":"Scope"}]}]' >> manifest.json ++region_code=WEEU ++export TF_VAR_app_registration_app_id=$(az ad app create \ + --display-name ${region_code}-webapp-registration \ + --enable-id-token-issuance true \ + --sign-in-audience AzureADMyOrg \ + --required-resource-access @manifest.json \ + --query "appId" | tr -d '"') ++export TF_VAR_webapp_client_secret=$(az ad app credential reset \ + --id $TF_VAR_app_registration_app_id --append \ + --query "password" | tr -d '"') ++export TF_VAR_use_webapp=true +rm manifest.json ++``` +# [Windows](#tab/windows) ++```powershell ++Add-Content -Path manifest.json -Value '[{"resourceAppId":"00000003-0000-0000-c000-000000000000","resourceAccess":[{"id":"e1fe6dd8-ba31-4d61-89e7-88639da4683d","type":"Scope"}]}]' ++$region_code="WEEU" ++$env:TF_VAR_app_registration_app_id = (az ad app create ` + --display-name $region_code-webapp-registration ` + --required-resource-accesses ./manifest.json ` + --query "appId").Replace('"',"") ++$env:TF_VAR_webapp_client_secret=(az ad app credential reset ` + --id $env:TF_VAR_app_registration_app_id --append ` + --query "password").Replace('"',"") ++$env:TF_VAR_use_webapp="true" ++del manifest.json ++``` ++# [Azure DevOps](#tab/devops) ++Currently, it isn't possible to perform this action from Azure DevOps. +++ ## Next step > [!div class="nextstepaction"] |
sap | Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/tutorial.md | When you choose a name for your service principal, make sure that the name is un 1. Give the service principal Contributor and User Access Administrator permissions. ```cloudshell-interactive- export subscriptionId="<subscriptionId>" + export ARM_SUBSCRIPTION_ID="<subscriptionId>" export control_plane_env_code="MGMT" az ad sp create-for-rbac --role="Contributor" \- --scopes="/subscriptions/${subscriptionId}" \ + --scopes="/subscriptions/${ARM_SUBSCRIPTION_ID}" \ --name="${control_plane_env_code}-Deployment-Account" ``` When you choose a name for your service principal, make sure that the name is un az role assignment create --assignee ${appId} \ --role "User Access Administrator" \- --scope /subscriptions/${subscriptionId} + --scope /subscriptions/${ARM_SUBSCRIPTION_ID} ``` If you don't assign the User Access Administrator role to the service principal, you can't assign permissions by using the automation. 
The sample SAP library configuration file `MGMT-NOEU-SAP_LIBRARY.tfvars` is in t ```bash - export subscriptionId="<subscriptionId>" - export spn_id="<appId>" - export spn_secret="<password>" - export tenant_id="<tenantId>" - export env_code="MGMT" - export vnet_code="DEP00" - export region_code="<region_code>" + export ARM_SUBSCRIPTION_ID="<subscriptionId>" + export ARM_CLIENT_ID="<appID>" + export ARM_CLIENT_SECRET="<password>" + export ARM_TENANT_ID="<tenant>" + export env_code="MGMT" + export vnet_code="DEP00" + export region_code="<region_code>" export DEPLOYMENT_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation" export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES"- export ARM_SUBSCRIPTION_ID="${subscriptionId}" + cd $CONFIG_REPO_PATH ${DEPLOYMENT_REPO_PATH}/deploy/scripts/deploy_controlplane.sh \ --deployer_parameter_file DEPLOYER/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE.tfvars \ --library_parameter_file LIBRARY/${env_code}-${region_code}-SAP_LIBRARY/${env_code}-${region_code}-SAP_LIBRARY.tfvars \- --subscription "${subscriptionId}" \ - --spn_id "${spn_id}" \ - --spn_secret "${spn_secret}" \ - --tenant_id "${tenant_id}" \ + --subscription "${ARM_SUBSCRIPTION_ID}" \ + --spn_id "${ARM_CLIENT_ID}" \ + --spn_secret "${ARM_CLIENT_SECRET}" \ + --tenant_id "${ARM_TENANT_ID}" \ --auto-approve ``` For this example configuration, the resource group is `MGMT-NOEU-DEP00-INFRASTRU ```yaml - bom_base_name: S41909SPS03_v0010ms + bom_base_name: S4HANA_2021_FP01_v0001ms ``` For this example configuration, the resource group is `MGMT-NOEU-DEP00-INFRASTRU ```yaml - bom_base_name: S41909SPS03_v0010ms + bom_base_name: S4HANA_2021_FP01_v0001ms kv_name: <Deployer KeyVault Name> ``` For this example configuration, the resource group is `MGMT-NOEU-DEP00-INFRASTRU ```yaml - bom_base_name: S41909SPS03_v0010 + bom_base_name: S4HANA_2021_FP01_v0001ms kv_name: <Deployer KeyVault 
Name>- check_storage_account: false + BOM_directory: ${HOME}/Azure_SAP_Automated_Deployment/samples/SAP ``` Deploy the SAP system. export sap_env_code="DEV" export region_code="<region_code>" export vnet_code="SAP01"+export SID="X00" export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES" export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation" -cd ${CONFIG_REPO_PATH}/SYSTEM/${sap_env_code}-${region_code}-${vnet_code}-X00 +cd ${CONFIG_REPO_PATH}/SYSTEM/${sap_env_code}-${region_code}-${vnet_code}-${SID} ${DEPLOYMENT_REPO_PATH}/deploy/scripts/installer.sh \- --parameterfile "${sap_env_code}-${region_code}-${vnet_code}-X00.tfvars" \ + --parameterfile "${sap_env_code}-${region_code}-${vnet_code}-${SID}.tfvars" \ --type sap_system ``` The deployment command for the `northeurope` example looks like: cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/DEV-NOEU-SAP01-X00 ${DEPLOYMENT_REPO_PATH}/deploy/scripts/installer.sh \- --parameterfile DEV-NOEU-SAP01-X00.tfvars \ - --type sap_system \ + --parameterfile DEV-NOEU-SAP01-X00.tfvars \ + --type sap_system \ --auto-approve ``` Before you begin, sign in to your Azure account. Then, check that you're in the Go to the `DEV-NOEU-SAP01-X00` subfolder inside the `SYSTEM` folder. 
Then, run this command: ```bash-export sap_env_code="DEV" -export region_code="NOEU" -export vnet_code="SAP01" +export sap_env_code="DEV" +export region_code="NOEU" +export sap_vnet_code="SAP02" -cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/${sap_env_code}-${region_code}-${vnet_code}-X00 +cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/${sap_env_code}-${region_code}-${sap_vnet_code}-X00 ${DEPLOYMENT_REPO_PATH}/deploy/scripts/remover.sh \- --parameterfile "${sap_env_code}-${region_code}-${vnet_code}-X00.tfvars" \ + --parameterfile "${sap_env_code}-${region_code}-${sap_vnet_code}-X00.tfvars" \ --type sap_system ``` Go to the `DEV-XXXX-SAP01-INFRASTRUCTURE` subfolder inside the `LANDSCAPE` folde ```bash -export sap_env_code="DEV" -export region_code="NOEU" -export vnet_code="SAP01" +export sap_env_code="DEV" +export region_code="NOEU" +export sap_vnet_code="SAP01" -cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/LANDSCAPE/${sap_env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE +cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/LANDSCAPE/${sap_env_code}-${region_code}-${sap_vnet_code}-INFRASTRUCTURE ${DEPLOYMENT_REPO_PATH}/deploy/scripts/remover.sh \- --parameterfile ${sap_env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE.tfvars \ + --parameterfile ${sap_env_code}-${region_code}-${sap_vnet_code}-INFRASTRUCTURE.tfvars \ --type sap_landscape ``` |
search | Index Ranking Similarity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/index-ranking-similarity.md | -BM25 applies to strings (text) on fields having a "searchable" attribution. At query time, the search engine uses BM25 to calculate a **@searchScore** for each match in a given query. Matching documents are ranked by their search score, with the top results returned in the query response. +BM25 applies to: +++ Queries that use the `search` parameter for full text search, on text fields having a `searchable` attribution.++ Scoring is scoped to `searchFields`, or to all `searchable` fields if `searchFields` is null.++The search engine uses BM25 to calculate a **@searchScore** for each match in a given query. Matching documents are ranked by their search score, with the top results returned in the query response. It's possible to get some [score variation](index-similarity-and-scoring.md#score-variation) in results, even from the same query executing over the same search index, but usually these variations are small and don't change the overall ranking of results. BM25 has defaults for weighting term frequency and document length. You can customize these properties if the defaults aren't suited to your content. Configuration changes are scoped to individual indexes, which means you can adjust relevance scoring based on the characteristics of each index. |
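The term-frequency and document-length weighting that these defaults control can be sketched with the standard Okapi BM25 formula. This is an illustration only, not Azure Cognitive Search's implementation; `k1` (term-frequency saturation) and `b` (document-length normalization) are the two tunable parameters.

```python
import math

# Illustrative Okapi BM25 score for a single query term in one document
# (a sketch, not the service's internal implementation).
def bm25_term_score(tf, doc_len, avg_doc_len, n_docs, docs_with_term,
                    k1=1.2, b=0.75):
    # Rarer terms across the index get a larger inverse document frequency.
    idf = math.log(1 + (n_docs - docs_with_term + 0.5) / (docs_with_term + 0.5))
    # Repeated occurrences saturate (k1); long documents are normalized (b).
    tf_norm = tf * (k1 + 1) / (tf + k1 * (1 - b + b * doc_len / avg_doc_len))
    return idf * tf_norm

# A term that is rare across the index scores higher than a common one,
# all else being equal.
rare = bm25_term_score(tf=3, doc_len=120, avg_doc_len=100,
                       n_docs=10_000, docs_with_term=5)
common = bm25_term_score(tf=3, doc_len=120, avg_doc_len=100,
                         n_docs=10_000, docs_with_term=5_000)
print(rare > common)  # True
```

Raising `b` penalizes long documents more; raising `k1` lets repeated term occurrences keep increasing the score for longer before saturating.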
search | Index Similarity And Scoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/index-similarity-and-scoring.md | Title: Relevance and scoring + Title: BM25 relevance scoring -description: Explains the concepts of relevance and scoring in Azure Cognitive Search, and what a developer can do to customize the scoring result. +description: Explains the concepts of BM25 relevance and scoring in Azure Cognitive Search, and what a developer can do to customize the scoring result. Previously updated : 08/31/2023 Last updated : 09/25/2023 -# Relevance and scoring in Azure Cognitive Search +# BM25 relevance and scoring for full text search -This article explains the relevance and the scoring algorithms used to compute search scores in Azure Cognitive Search. A relevance score is computed for each match found in a [full text search](search-lucene-query-architecture.md), where the strongest matches are assigned higher search scores. +This article explains the BM25 relevance scoring algorithm used to compute search scores for [full text search](search-lucene-query-architecture.md). BM25 relevance is exclusive to full text search. Filter queries, autocomplete and suggested queries, wildcard search or fuzzy search queries aren't scored or ranked for relevance. -Relevance applies to full text search only. Filter queries, autocomplete and suggested queries, wildcard search or fuzzy search queries aren't scored or ranked for relevance. 
--In Azure Cognitive Search, you can tune search relevance and boost search scores through these mechanisms: +In Azure Cognitive Search, you can configure algorithm parameters, and tune search relevance and boost search scores through these mechanisms: + Scoring algorithm configuration-+ Semantic ranking (in preview, described in [this article](semantic-search-overview.md)) + Scoring profiles++ [Semantic ranking](semantic-search-overview.md) + Custom scoring logic enabled through the *featuresMode* parameter -> [!NOTE] -> Matches are scored and ranked from high to low. The score is returned as "@search.score". By default, the top 50 are returned in the response, but you can use the **$top** parameter to return a smaller or larger number of items (up to 1000 in a single response), and **$skip** to get the next set of results. - ## Relevance scoring -Relevance scoring refers to the computation of a search score that serves as an indicator of an item's relevance in the context of the current query. The higher the score, the more relevant the item. +Relevance scoring refers to the computation of a search score (**@search.score**) that serves as an indicator of an item's relevance in the context of the current query. The range is unbounded. However, the higher the score, the more relevant the item. ++By default, the top 50 highest scoring matches are returned in the response, but you can use the **$top** parameter to return a smaller or larger number of items (up to 1000 in a single response), and **$skip** to get the next set of results. The search score is computed based on statistical properties of the string input and the query itself. Azure Cognitive Search finds documents that match on search terms (some or all, depending on [searchMode](/rest/api/searchservice/search-documents#query-parameters)), favoring documents that contain many instances of the search term. 
The search score goes up even higher if the term is rare across the data index, but common within the document. The basis for this approach to computing relevance is known as *TF-IDF*, or term frequency-inverse document frequency. -Search scores can be repeated throughout a result set. When multiple hits have the same search score, the ordering of the same scored items is undefined and not stable. Run the query again, and you might see items shift position, especially if you are using the free service or a billable service with multiple replicas. Given two items with an identical score, there's no guarantee that one appears first. +Search scores can be repeated throughout a result set. When multiple hits have the same search score, the ordering of the same scored items is undefined and not stable. Run the query again, and you might see items shift position, especially if you're using the free service or a billable service with multiple replicas. Given two items with an identical score, there's no guarantee that one appears first. -If you want to break the tie among repeating scores, you can add an **$orderby** clause to first order by score, then order by another sortable field (for example, `$orderby=search.score() desc,Rating desc`). For more information, see [$orderby](search-query-odata-orderby.md). +To break the tie among repeating scores, you can add an **$orderby** clause to first order by score, then order by another sortable field (for example, `$orderby=search.score() desc,Rating desc`). For more information, see [$orderby](search-query-odata-orderby.md). > [!NOTE] > A `@search.score = 1` indicates an un-scored or un-ranked result set. The score is uniform across all results. Un-scored results occur when the query form is fuzzy search, wildcard or regex queries, or an empty search (`search=*`, sometimes paired with filters, where the filter is the primary means for returning a match).
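The `$orderby` tie-break described above (score first, then another sortable field) is applied by the service, but its effect can be illustrated client-side. The hit documents here are hypothetical:

```python
# Hypothetical hits: two share the same @search.score, so Rating breaks the tie,
# mirroring $orderby=search.score() desc,Rating desc.
hits = [
    {"HotelName": "A", "@search.score": 2.5, "Rating": 3},
    {"HotelName": "B", "@search.score": 2.5, "Rating": 5},
    {"HotelName": "C", "@search.score": 3.1, "Rating": 4},
]
ordered = sorted(hits, key=lambda h: (-h["@search.score"], -h["Rating"]))
print([h["HotelName"] for h in ordered])  # ['C', 'B', 'A']
```

Without the secondary `Rating` key, the relative order of A and B would be undefined from the service's perspective, since they carry identical scores.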
For scalability, Azure Cognitive Search distributes each index horizontally thro By default, the score of a document is calculated based on statistical properties of the data *within a shard*. This approach is generally not a problem for a large corpus of data, and it provides better performance than having to calculate the score based on information across all shards. That said, using this performance optimization could cause two very similar documents (or even identical documents) to end up with different relevance scores if they end up in different shards. -If you prefer to compute the score based on the statistical properties across all shards, you can do so by adding *scoringStatistics=global* as a [query parameter](/rest/api/searchservice/search-documents) (or add *"scoringStatistics": "global"* as a body parameter of the [query request](/rest/api/searchservice/search-documents)). +If you prefer to compute the score based on the statistical properties across all shards, you can do so by adding `scoringStatistics=global` as a [query parameter](/rest/api/searchservice/search-documents) (or add `"scoringStatistics": "global"` as a body parameter of the [query request](/rest/api/searchservice/search-documents)). ```http POST https://[service name].search.windows.net/indexes/hotels/docs/search?api-version=2020-06-30 POST https://[service name].search.windows.net/indexes/hotels/docs/search?api-ve } ``` -Using scoringStatistics will ensure that all shards in the same replica provide the same results. That said, different replicas may be slightly different from one another as they are always getting updated with the latest changes to your index. In some scenarios, you may want your users to get more consistent results during a "query session". In such scenarios, you can provide a `sessionId` as part of your queries. The `sessionId` is a unique string that you create to refer to a unique user session. 
+Using `scoringStatistics` will ensure that all shards in the same replica provide the same results. That said, different replicas may be slightly different from one another as they're always getting updated with the latest changes to your index. In some scenarios, you may want your users to get more consistent results during a "query session". In such scenarios, you can provide a `sessionId` as part of your queries. The `sessionId` is a unique string that you create to refer to a unique user session.

```http
POST https://[service name].search.windows.net/indexes/hotels/docs/search?api-version=2020-06-30
POST https://[service name].search.windows.net/indexes/hotels/docs/search?api-ve
}
```

-As long as the same `sessionId` is used, a best-effort attempt will be made to target the same replica, increasing the consistency of results your users will see.
+As long as the same `sessionId` is used, a best-effort attempt is made to target the same replica, increasing the consistency of results your users will see.

> [!NOTE]
> Reusing the same `sessionId` values repeatedly can interfere with the load balancing of the requests across replicas and adversely affect the performance of the search service. The value used as sessionId cannot start with a '_' character.

A scoring profile is part of the index definition, composed of weighted fields, functions, and parameters.

## featuresMode parameter (preview)

-[Search Documents](/rest/api/searchservice/preview-api/search-documents) requests have a new [featuresMode](/rest/api/searchservice/preview-api/search-documents#featuresmode) parameter that can provide additional detail about relevance at the field level. Whereas the `@searchScore` is calculated for the document all-up (how relevant is this document in the context of this query), through featuresMode you can get information about individual fields, as expressed in a `@search.features` structure.
The structure contains all fields used in the query (either specific fields through **searchFields** in a query, or all fields attributed as **searchable** in an index). For each field, you get the following values:

+[Search Documents](/rest/api/searchservice/preview-api/search-documents) requests have a new [featuresMode](/rest/api/searchservice/preview-api/search-documents#featuresmode) parameter that can provide more detail about relevance at the field level. Whereas the `@searchScore` is calculated for the document all-up (how relevant is this document in the context of this query), through featuresMode you can get information about individual fields, as expressed in a `@search.features` structure. The structure contains all fields used in the query (either specific fields through **searchFields** in a query, or all fields attributed as **searchable** in an index). For each field, you get the following values:

+ Number of unique tokens found in the field
+ Similarity score, or a measure of how similar the content of the field is, relative to the query term
+ Term frequency, or the number of times the query term was found in the field

For a query that targets the "description" and "title" fields, a response that includes `@search.features` might look like this:

   "similarityScore": 1.75451557,
   "termFrequency" : 6
  }
+ }
+ }
+]
```

You can consume these data points in [custom scoring solutions](https://github.com/Azure-Samples/search-ranking-tutorial) or use the information to debug search relevance problems. |
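As an illustration, the following Python snippet shows one way to consume a `@search.features` structure from a response document. The document shape mirrors the partial response above; the field names and numbers are hypothetical sample data, not output from a live service:

```python
# Illustrative only: rank fields by the per-field relevance detail that
# featuresMode adds to a response document.
result_doc = {
    "@search.score": 3.25,
    "@search.features": {
        "description": {
            "uniqueTokenMatches": 1.0,
            "similarityScore": 0.29541412,
            "termFrequency": 2.0,
        },
        "title": {
            "uniqueTokenMatches": 1.0,
            "similarityScore": 1.75451557,
            "termFrequency": 6.0,
        },
    },
}

def field_contributions(doc: dict) -> list[tuple[str, float]]:
    """Sort fields by similarity score to see which field drove the match."""
    features = doc.get("@search.features", {})
    return sorted(
        ((field, f["similarityScore"]) for field, f in features.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )

print(field_contributions(result_doc))  # "title" contributed most here
```

A breakdown like this is one way to debug why a document ranked where it did.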
search | Search Query Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-create.md | Title: Create a query + Title: Full-text query -description: Learn how to construct a query request in Cognitive Search, which tools and APIs to use for testing and code, and how query decisions start with index design. +description: Learn how to construct a query request for full text search in Azure Cognitive Search. - Previously updated : 03/22/2023+ Last updated : 09/25/2023 -# Creating queries in Azure Cognitive Search +# Create a full-text query in Azure Cognitive Search -If you're building a query for the first time, this article describes approaches and methods for setting up the request. It also introduces a query structure, and explains how field attributes and linguistic analyzers can impact query outcomes. +If you're building a query for [full text search](search-lucene-query-architecture.md), this article provides steps for setting up the request. It also introduces a query structure, and explains how field attributes and linguistic analyzers can impact query outcomes. -## What's a query request? +## Prerequisites -A query is a read-only request against the docs collection of a single search index. It specifies a 'search' parameter, which contains the query expression consisting of terms, quote-enclosed phrases, and operators. ++ A [search index](search-how-to-create-search-index.md) with string fields attributed as `searchable`. -Other parameters on the request provide more definition to the query and response. For example, 'searchFields' scopes query execution to specific fields, 'select' specifies which fields are returned in results, and 'count' returns the number of matches found in the index. ++ Read permissions on the search index. For read access, include a [query API key](search-security-api-keys.md) on the request, or give the caller [Search Index Data Reader](search-security-rbac.md) permissions. 
-The following example gives you a general idea of a query request by showing some of the available parameters. For more information about query composition, see [Query types and compositions](search-query-overview.md) and [Search Documents (REST)](/rest/api/searchservice/search-documents). +## Example of a full text query request ++In Azure Cognitive Search, a query is a read-only request against the docs collection of a single search index. ++A full text query is specified in a `search` parameter and consists of terms, quote-enclosed phrases, and operators. Other parameters add more definition. For example, `searchFields` scopes query execution to specific fields, `select` specifies which fields are returned in results, and `count` returns the number of matches found in the index. ++The following [Search Documents REST API](/rest/api/searchservice/search-documents) call illustrates a query request using the aforementioned parameters. ```http POST https://[service name].search.windows.net/indexes/hotels-sample-index/docs/search?api-version=2020-06-30 POST https://[service name].search.windows.net/indexes/hotels-sample-index/docs/ ## Choose a client -For early development and proof-of-concept testing, we recommend starting with an interactive tool like Azure portal, or the Postman app for making REST API calls. With these approaches, you can test a query request in isolation and assess the effects of different properties without having to write any code. +For early development and proof-of-concept testing, start with Azure portal or the Postman app for making REST API calls. These approaches are interactive, useful for targeted testing, and help you assess the effects of different properties without having to write any code. ++To call search from within an app, use the **Azure.Document.Search** client libraries in the Azure SDKs for .NET, Java, JavaScript, and Python. 
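For reference, here's a sketch of assembling and sending a request body like the one above using only the Python standard library. The service name, index name, and query key are placeholders you'd replace with your own values:

```python
# Minimal sketch of a full text query request. The endpoint pattern and
# api-key header follow the REST example above; nothing here is called
# against a live service.
import json
import urllib.request

def build_query(search: str, search_fields: str, select: str) -> dict:
    """Assemble the request body for a full text search query."""
    return {
        "search": search,               # terms, phrases, and operators
        "searchFields": search_fields,  # scope execution to specific fields
        "select": select,               # fields to return in results
        "count": True,                  # include the total match count
    }

def run_query(service: str, index: str, api_key: str, body: dict) -> dict:
    url = (f"https://{service}.search.windows.net/indexes/{index}"
           f"/docs/search?api-version=2020-06-30")
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json", "api-key": api_key},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

body = build_query("pool spa +airport", "Description,Tags",
                   "HotelName,Description,Category")
# run_query("<service-name>", "hotels-sample-index", "<query-key>", body)
```

In practice, the Azure SDK client libraries described next handle the request plumbing for you.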
++### [**Azure portal**](#tab/portal-text-query) ++In the portal, when you open an index, you can work with Search Explorer alongside the index JSON definition in side-by-side tabs for easy access to field attributes. Check the **Fields** table to see which ones are searchable, sortable, filterable, and facetable while testing queries. -To call search from within an app, we recommend the Azure.Document.Search client libraries in the Azure SDKs for .NET, Java, JavaScript, and Python. +1. Sign in to the [Azure portal](https://portal.azure.com) and find your search service. -### Permissions +1. Open **Indexes** and select an index. -A query request requires read permissions, granted via an API key passed in the header. Any operation, including query requests, will work under an [admin API key](search-security-api-keys.md), but query requests can optionally use a [query API key](search-security-api-keys.md#create-query-keys). Query API keys are strongly recommended. You can create up to 50 per service and assign different keys to different applications. +1. An index opens to the [**Search explorer**](search-explorer.md) tab so that you can query right away. A query string can use simple or full syntax, with support for all query parameters (filter, select, searchFields, and so on). -In Azure portal, access to the built-in tools, wizards, and objects require membership in the Contributor role or higher on the search service. + Here's a full text search query expression that works for the Hotels sample index: -### Use Azure portal to query an index + `search=pool spa +airport&$searchFields=Description,Tags&$select=HotelName,Description,Category&$count=true` -[Search explorer (portal)](search-explorer.md) is a query interface in the Azure portal that runs queries against indexes on the underlying search service. 
Internally, the portal makes [Search Documents](/rest/api/searchservice/search-documents) requests, but can't invoke Autocomplete, Suggestions, or Document Lookup. + The following screenshot illustrates the query and response: -You can select any index and REST API version, including preview. A query string can use simple or full syntax, with support for all query parameters (filter, select, searchFields, and so on). In the portal, when you open an index, you can work with Search Explorer alongside the index JSON definition in side-by-side tabs for easy access to field attributes. Check what fields are searchable, sortable, filterable, and facetable while testing queries. + :::image type="content" source="media/search-explorer/search-explorer-full-text-query-hotels.png" alt-text="Screenshot of Search Explorer with a full text query."::: -### Use a REST client +Notice that you can change the REST API version if you require search behaviors from a specific version, or switch to **JSON view** if you want to paste in the JSON definition of a query. For more information about what a JSON definition looks like, see [Search Documents (REST)](/rest/api/searchservice/search-documents). -The [Postman app](https://www.postman.com/downloads/) can function as a query client. Using the app, you can connect to your search service and send [Search Documents (REST)](/rest/api/searchservice/search-documents) requests. Numerous tutorials and examples demonstrate REST clients for querying indexing. +### [**REST API**](#tab/rest-text-query) ++[Postman app](https://www.postman.com/downloads/) is useful for working with the REST APIs, such as [Search Documents (REST)](/rest/api/searchservice/search-documents). Start with [Create a search index using REST and Postman](search-get-started-rest.md) for step-by-step instructions for setting up requests. -Each request is standalone, so you must provide the endpoint, index name, and API version on every request. 
Other properties, Content-Type and API key, are passed on the request header. For more information, see [Search Documents (REST)](/rest/api/searchservice/search-documents) for help with formulating query requests. +The following example calls the REST API for full text search: -### Use an Azure SDK +```http +POST https://[service name].search.windows.net/indexes/hotels-sample-index/docs/search?api-version=2020-06-30 +{ + "search": "NY +view", + "queryType": "simple", + "searchMode": "all", + "searchFields": "HotelName, Description, Address/City, Address/StateProvince, Tags", + "select": "HotelName, Description, Address/City, Address/StateProvince, Tags", + "count": "true" +} +``` ++### [**Azure SDKs**](#tab/sdk-text-query) -For Cognitive Search, the Azure SDKs implement generally available features. As such, you can use any of the SDKs to query an index. All of them provide a **SearchClient** that has methods to interacting with an index, from loading an index with search documents, to formulating query requests. +The following Azure SDKs provide a **SearchClient** that has methods for formulating query requests. | Azure SDK | Client | Examples | |--|--|-| | .NET | [SearchClient](/dotnet/api/azure.search.documents.searchclient) | [DotNetHowTo](https://github.com/Azure-Samples/search-dotnet-getting-started/tree/master/DotNetHowTo) | | Java | [SearchClient](/java/api/com.azure.search.documents.searchclient) | [SearchForDynamicDocumentsExample.java](https://github.com/Azure/azure-sdk-for-java/blob/azure-search-documents_11.1.3/sdk/search/azure-search-documents/src/samples/java/com/azure/search/documents/SearchForDynamicDocumentsExample.java) |-| JavaScript | [SearchClient](/javascript/api/@azure/search-documents/searchclient) | Pending. 
|
+| JavaScript | [SearchClient](/javascript/api/@azure/search-documents/searchclient) | [SDK examples](/javascript/api/overview/azure/search-documents-readme?view=azure-node-latest#examples&preserve-view=true) |
| Python | [SearchClient](/python/api/azure-search-documents/azure.search.documents.searchclient) | [sample_simple_query.py](https://github.com/Azure/azure-sdk-for-python/blob/7cd31ac01fed9c790cec71de438af9c45cb45821/sdk/search/azure-search-documents/samples/sample_simple_query.py) |
+
+

## Choose a query type: simple | full

If your query is full text search, a query parser is used to process any text that's passed as search terms and phrases. Azure Cognitive Search offers two query parsers.

Search is fundamentally a user-driven exercise, where terms or phrases are collected from a search box.

## Effect of field attributes on queries

-If you're familiar with [query types and composition](search-query-overview.md), you might remember that the parameters on a query request depend on field attributes in an index. For example, only fields marked as *searchable* and *retrievable* can be used in queries and search results. When setting the `search`, `filter`, and `orderby` parameters in your request, you should check attributes to avoid unexpected results.
+If you're familiar with [query types and composition](search-query-overview.md), you might remember that the parameters on a query request depend on field attributes in an index. For example, only fields marked as `searchable` and `retrievable` can be used in queries and search results. When setting the `search`, `filter`, and `orderby` parameters in your request, you should check attributes to avoid unexpected results.

-In the portal screenshot below of the [hotels sample index](search-get-started-portal.md), only the last two fields "LastRenovationDate" and "Rating" can be used in an `"$orderby"` only clause.
+In the portal screenshot below of the [hotels sample index](search-get-started-portal.md), only the last two fields "LastRenovationDate" and "Rating" are `sortable`, a requirement for use in an `$orderby` clause.

![Index definition for the hotel sample](./media/search-query-overview/hotel-sample-index-definition.png "Index definition for the hotel sample") |
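To illustrate how attributes gate query parameters, the following Python sketch filters an index definition fragment down to the fields each parameter can use. The fragment is invented for this example, though the attribute names (`searchable`, `sortable`) match the ones used in index JSON:

```python
# Hypothetical slice of an index definition: only "sortable" fields may
# appear in $orderby, and only "searchable" fields participate in full
# text search via search/searchFields.
index_fields = [
    {"name": "HotelName",          "searchable": True,  "sortable": False},
    {"name": "Description",        "searchable": True,  "sortable": False},
    {"name": "LastRenovationDate", "searchable": False, "sortable": True},
    {"name": "Rating",             "searchable": False, "sortable": True},
]

def usable_in(attribute: str, fields: list[dict]) -> list[str]:
    """Names of fields carrying the given attribute."""
    return [f["name"] for f in fields if f.get(attribute)]

print(usable_in("sortable", index_fields))    # candidates for $orderby
print(usable_in("searchable", index_fields))  # candidates for search/searchFields
```

Checking attributes this way before composing a request avoids the "field is not sortable" class of query errors.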
search | Search Query Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-overview.md | -Azure Cognitive Search offers a rich query language to support a broad range of scenarios, from free text search, to highly-specified query patterns. This article describes query requests and the kinds of queries you can create. +Azure Cognitive Search offers a rich query language to support a broad range of scenarios, from free text search, to highly specified query patterns. This article describes query requests and the kinds of queries you can create. In Cognitive Search, a query is a full specification of a round-trip **`search`** operation, with parameters that both inform query execution and shape the response coming back. To illustrate, the following query example calls the [Search Documents (REST API)](/rest/api/searchservice/search-documents). It's a parameterized, free text query with a boolean operator, targeting the [hotels-sample-index](search-get-started-portal.md) documents collection. It also selects which fields are returned in results. The above list is representative but not exhaustive. For the full list of parame ## Types of queries -With a few notable exceptions, a query request iterates over inverted indexes that are structured for fast scans, where a match can be found in potentially any field, within any number of search documents. In Cognitive Search, the primary methodology for finding matches is either full text search or filters, but you can also implement other well-known search experiences like autocomplete, or geo-location search. The rest of this article summarizes queries in Cognitive Search and provides links to more information and examples. +With a few notable exceptions, a full text query request iterates over inverted indexes that are structured for fast scans, where a match can be found in potentially any field, within any number of search documents. 
In Cognitive Search, the primary methodology for finding matches is either full text search or filters, but you can also implement other well-known search experiences like autocomplete, or geo-location search. The rest of this article summarizes queries in Cognitive Search and provides links to more information and examples. ## Full text search -If your search app includes a search box that collects term inputs, then full text search is probably the query operation backing that experience. Full text search accepts terms or phrases passed in a **`search`** parameter in all "searchable" fields in your index. Optional boolean operators in the query string can specify inclusion or exclusion criteria. Both the simple parser and full parser support full text search. +Full text search accepts terms or phrases passed in a **`search`** parameter in all "searchable" fields in your index. Optional boolean operators in the query string can specify inclusion or exclusion criteria. Both the simple parser and full parser support full text search. -In Cognitive Search, full text search is built on the Apache Lucene query engine. Query strings in full text search undergo lexical analysis to make scans more efficient. Analysis includes lower-casing all terms, removing stop words like "the", and reducing terms to primitive root forms. The default analyzer is Standard Lucene. +In Cognitive Search, full text search is built on the Apache Lucene query engine. Query strings in full text search undergo lexical analysis to make scans more efficient. Analysis includes lower-casing all terms, removing stop words like "the" and reducing terms to primitive root forms. The default analyzer is Standard Lucene. 
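The lexical analysis steps named above can be sketched as follows. This is a deliberately crude illustration of lower-casing, stop-word removal, and root-form reduction, not the Standard Lucene analyzer:

```python
# Toy lexical analysis: lower-case, drop stop words, strip a few common
# suffixes. Real analyzers apply full stemming and tokenization rules.
STOP_WORDS = {"the", "a", "an", "and", "or"}

def crude_stem(token: str) -> str:
    # Very rough suffix stripping, for illustration only.
    for suffix in ("ing", "ed", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def analyze(text: str) -> list[str]:
    tokens = text.lower().split()
    return [crude_stem(t) for t in tokens if t not in STOP_WORDS]

print(analyze("The Ocean AIR hotel and heated pools"))
# -> ['ocean', 'air', 'hotel', 'heat', 'pool']
```

Because both indexed content and query strings pass through the same analysis, a query for "Pools" can match a document that says "pool".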
When matching terms are found, the query engine reconstitutes a search document containing the match using the document key or ID to assemble field values, ranks the documents in order of relevance, and returns the top 50 (by default) in the response or a different number if you specified **`top`**.

If you're implementing full text search, understanding how your content is tokenized helps you debug query anomalies.

## Autocomplete and suggested queries

-[Autocomplete or suggested results](search-add-autocomplete-suggestions.md) are alternatives to **`search`** that fire successive query requests based on partial string inputs (after each character) in a search-as-you-type experience. You can use **`autocomplete`** and **`suggestions`** parameter together or separately, as described in [this tutorial](tutorial-csharp-type-ahead-and-suggestions.md), but you cannot use them with **`search`**. Both completed terms and suggested queries are derived from index contents. The engine will never return a string or suggestion that is non-existent in your index. For more information, see [Autocomplete (REST API)](/rest/api/searchservice/autocomplete) and [Suggestions (REST API)](/rest/api/searchservice/suggestions).
+[Autocomplete or suggested results](search-add-autocomplete-suggestions.md) are alternatives to **`search`** that fire successive query requests based on partial string inputs (after each character) in a search-as-you-type experience. You can use the **`autocomplete`** and **`suggestions`** parameters together or separately, as described in [this tutorial](tutorial-csharp-type-ahead-and-suggestions.md), but you can't use them with **`search`**. Both completed terms and suggested queries are derived from index contents. The engine never returns a string or suggestion that is nonexistent in your index. For more information, see [Autocomplete (REST API)](/rest/api/searchservice/autocomplete) and [Suggestions (REST API)](/rest/api/searchservice/suggestions).
## Filter search An advanced query form depends on the Full Lucene parser and operators that trig ## Next steps -For a closer look at query implementation, review the examples for each syntax. If you are new to full text search, a closer look at what the query engine does might be an equally good choice. +For a closer look at query implementation, review the examples for each syntax. If you're new to full text search, a closer look at what the query engine does might be an equally good choice. + [Simple query examples](search-query-simple-examples.md) + [Lucene syntax query examples for building advanced queries](search-query-lucene-examples.md) |
security | Secure Design | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/secure-design.md | description: This article discusses best practices to consider during the requir Previously updated : 02/06/2023 Last updated : 09/26/2023 Use the following resources during the training stage to familiarize yourself wi * [Secure DevOps Kit for Azure](https://github.com/azsk/AzTS-docs/#readme) is a collection of scripts, tools, extensions, and automations that cater to the comprehensive Azure subscription and resource security needs of DevOps teams that use extensive automation. The Secure DevOps Kit for Azure can show you how to smoothly integrate security into your native DevOps workflows. The kit addresses tools like security verification tests (SVTs), which can help developers write secure code and test the secure configuration of their cloud applications in the coding and early development stages. -* [Security best practices for Azure solutions](https://azure.microsoft.com/resources/security-best-practices-for-azure-solutions) provides a collection of security best practices to use as you design, deploy, and manage your cloud solutions by using Azure. +* [Azure security best practices and patterns](../fundamentals/best-practices-and-patterns.md) - A collection of security best practices to use when you design, deploy, and manage cloud solutions by using Azure. Guidance is intended to be a resource for IT pros. This might include designers, architects, developers, and testers who build and deploy secure Azure solutions. ## Requirements -The requirements definition phase is a crucial step in defining what your application is and what it will do when it's released. The requirements phase is also a time to think about the security controls that you'll build into your application. During this phase, you also begin the steps that you'll take throughout the SDL to ensure that you release and deploy a secure application. 
+The requirements definition phase is a crucial step in defining what your application is and what it does when it's released. The requirements phase is also a time to think about the security controls that you build into your application. During this phase, you also begin the steps that you take throughout the SDL to ensure that you release and deploy a secure application. ### Consider security and privacy issues Ask security questions like: * Does my application collect or contain sensitive personal or customer data that can be used, either on its own or with other information, to identify, contact, or locate a single person? -* Does my application collect or contain data that can be used to access an individual's medical, educational, financial, or employment information? Identifying the sensitivity of your data during the requirements phase helps you classify your data and identify the data protection method you'll use for your application. +* Does my application collect or contain data that can be used to access an individual's medical, educational, financial, or employment information? Identifying the sensitivity of your data during the requirements phase helps you classify your data and identify the data protection method you use for your application. -* Where and how is my data stored? Consider how you'll monitor the storage services that your application uses for any unexpected changes (such as slower response times). Will you be able to influence logging to collect more detailed data and analyze a problem in depth? +* Where and how is my data stored? Consider how you monitor the storage services that your application uses for any unexpected changes (such as slower response times). Are you able to influence logging to collect more detailed data and analyze a problem in depth? -* Will my application be available to the public (on the internet) or internally only? 
If your application is available to the public, how do you protect the data that might be collected from being used in the wrong way? If your application is available internally only, consider who in your organization should have access to the application and how long they should have access. +* Is my application available to the public (on the internet) or internally only? If your application is available to the public, how do you protect the data that might be collected from being used in the wrong way? If your application is available internally only, consider who in your organization should have access to the application and how long they should have access. -* Do you understand your identity model before you begin designing your application? How will you determine that users are who they say they are and what a user is authorized to do? +* Do you understand your identity model before you begin designing your application? Can you determine that users are who they say they are and what a user is authorized to do? -* Does my application perform sensitive or important tasks (such as transferring money, unlocking doors, or delivering medicine)? Consider how you'll validate that the user performing a sensitive task is authorized to perform the task and how you'll authenticate that the person is who they say they are. Authorization (AuthZ) is the act of granting an authenticated security principal permission to do something. Authentication (AuthN) is the act of challenging a party for legitimate credentials. +* Does my application perform sensitive or important tasks (such as transferring money, unlocking doors, or delivering medicine)? Consider how you validate that the user performing a sensitive task is authorized to perform the task and how you authenticate that the person is who they say they are. Authorization (AuthZ) is the act of granting an authenticated security principal permission to do something. 
Authentication (AuthN) is the act of challenging a party for legitimate credentials. -* Does my application perform any risky software activities, like allowing users to upload or download files or other data? If your application does perform risky activities, consider how your application will protect users from handling malicious files or data. +* Does my application perform any risky software activities, like allowing users to upload or download files or other data? If your application does perform risky activities, consider how your application protects users from handling malicious files or data. ### Review OWASP top 10 Consider reviewing the [<span class="underline">OWASP Top 10 Application Securit Thinking about security controls to prevent breaches is important. However, you also want to [assume a breach](/devops/operate/security-in-devops) will occur. Assuming a breach helps answer some important questions about security in advance, so they don't have to be answered in an emergency: -* How will I detect an attack? -* What will I do if there's an attack or breach? +* How am I going to detect an attack? +* What am I going to do if there's an attack or breach? * How am I going to recover from the attack like data leaking or tampering? ## Design against security-related design and implementation flaws. Be sure that you're using the latest version of your framework and all the security features that are available in the framework. Microsoft offers a comprehensive [set of development tools](https://azure.microsoft.com/product-categories/developer-tools/) for all developers, working on any platform or language, to deliver cloud applications. You can code with the language of your choice by choosing from various [SDKs](https://azure.microsoft.com/downloads/). You can take advantage of full-featured integrated development environments (IDEs) and editors that have advanced debugging capabilities and built-in Azure support. 
-Microsoft offers various [languages, frameworks, and tools](/azure/?panel=sdkstools-all&pivot=sdkstools&product=popular#languages-and-tools) that you can use to develop applications on Azure. An example is [Azure for .NET and .NET Core developers](/dotnet/azure/). For each language and framework that we offer, you'll find quickstarts, tutorials, and API references to help you get started fast. +Microsoft offers various [languages, frameworks, and tools](/azure/?panel=sdkstools-all&pivot=sdkstools&product=popular#languages-and-tools) that you can use to develop applications on Azure. An example is [Azure for .NET and .NET Core developers](/dotnet/azure/). For each language and framework that we offer, you can find quickstarts, tutorials, and API references to help you get started fast. Azure offers various services you can use to host websites and web applications. These services let you develop in your favorite language, whether that's .NET, .NET Core, Java, Ruby, Node.js, PHP, or Python. [Azure App Service Web Apps](../../app-service/overview.md) (Web Apps) is one of these services. threat modeling during the design phase, when resolving potential issues is relatively easy and cost-effective. Using threat modeling in the design phase can greatly reduce your total cost of development. -To help facilitate the threat modeling process, we designed the [SDL Threat Modeling Tool](threat-modeling-tool.md) with non-security experts in mind. This tool makes threat modeling easier for all developers by providing clear guidance about how to create and analyze threat models. +To help facilitate the threat modeling process, we designed the [SDL Threat Modeling Tool](threat-modeling-tool.md) with nonsecurity experts in mind. This tool makes threat modeling easier for all developers by providing clear guidance about how to create and analyze threat models. 
-Modeling the application design and enumerating [STRIDE](https://docs.google.com/viewer?a=v&pid=sites&srcid=ZGVmYXVsdGRvbWFpbnxzZWN1cmVwcm9ncmFtbWluZ3xneDo0MTY1MmM0ZDI0ZjQ4ZDMy) threats (Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege) across all trust boundaries has proven an effective way to catch design errors early on. The following table lists the STRIDE threats and gives some example mitigations that use features provided by Azure. These mitigations won't work in every situation. +Modeling the application design and enumerating [STRIDE](https://docs.google.com/viewer?a=v&pid=sites&srcid=ZGVmYXVsdGRvbWFpbnxzZWN1cmVwcm9ncmFtbWluZ3xneDo0MTY1MmM0ZDI0ZjQ4ZDMy) threats (Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege) across all trust boundaries has proven an effective way to catch design errors early. The following table lists the STRIDE threats and gives some example mitigations that use features provided by Azure. These mitigations don't work in every situation. | Threat | Security property | Potential Azure platform mitigation | | - | | | security perimeter focus from a network-centric approach to an identity-centric approach. Historically, the primary on-premises security perimeter was an organization's network. Most on-premises security designs use the network as the primary security pivot. For-cloud applications, you are better served by considering identity as the +cloud applications, you're better served by considering identity as the primary security perimeter. Things you can do to develop an identity-centric approach to developing web applications: -* Enforce multi-factor authentication for users. +* Enforce multifactor authentication for users. * Use strong authentication and authorization platforms. * Apply the principle of least privilege. * Implement just-in-time access. 
-#### Enforce multi-factor authentication for users +#### Enforce multifactor authentication for users -Use two-factor authentication. Two-factor authentication is the current standard for authentication and authorization because it avoids the security weaknesses that are inherent in username and password types of authentication. Access to the Azure management interfaces (Azure portal/remote PowerShell) and to customer-facing services should be designed and configured to use [Azure AD Multi-Factor Authentication](../../active-directory/authentication/concept-mfa-howitworks.md). +Use two-factor authentication. Two-factor authentication is the current standard for authentication and authorization because it avoids the security weaknesses that are inherent in username and password types of authentication. Access to the Azure management interfaces (Azure portal/remote PowerShell) and to customer-facing services should be designed and configured to use [Azure AD Multifactor Authentication](../../active-directory/authentication/concept-mfa-howitworks.md). #### Use strong authentication and authorization platforms Use platform-supplied authentication and authorization mechanisms instead of custom code. This is because developing custom authentication code can be prone to error. Commercial code (for example, from Microsoft) often is extensively reviewed for security. [Azure Active Directory](../../active-directory/fundamentals/active-directory-whatis.md) (Azure AD) is the Azure solution for identity and access management. These Azure AD tools and services help with secure development: -* [Microsoft identity platform](../../active-directory/develop/index.yml) is a set of components that developers use to build apps that securely sign in users. The platform assists developers who are building single-tenant, line-of-business (LOB) apps and developers who are looking to develop multi-tenant apps. 
In addition to basic sign-in, apps built by using the Microsoft identity platform can call Microsoft APIs and custom APIs. The Microsoft identity platform supports industry-standard protocols like OAuth 2.0 and OpenID Connect. +* [Microsoft identity platform](../../active-directory/develop/index.yml) is a set of components that developers use to build apps that securely sign in users. The platform assists developers who are building single-tenant, line-of-business (LOB) apps and developers who are looking to develop multitenant apps. In addition to basic sign-in, apps built by using the Microsoft identity platform can call Microsoft APIs and custom APIs. The Microsoft identity platform supports industry-standard protocols like OAuth 2.0 and OpenID Connect. -* [Azure Active Directory B2C](../../active-directory-b2c/index.yml) (Azure AD B2C) is an identity management service you can use to customize and control how customers sign up, sign in, and manage their profiles when they use your applications. This includes applications that are developed for iOS, Android, and .NET, among others. Azure AD B2C enables these actions while protecting customer identities. +* [Azure Active Directory B2C](../../active-directory-b2c/index.yml) (Azure AD B2C) is an identity management service you use to customize and control how customers sign up, sign in, and manage their profiles when they use your applications. This includes applications that are developed for iOS, Android, and .NET, among others. Azure AD B2C enables these actions while protecting customer identities. #### Apply the principle of least privilege Some things should never be hard-coded in your software. Some examples are hostn When you put comments in your code, ensure that you don't save any sensitive information. 
This includes your email address, passwords, connection strings, information about your application that would only be known by someone in your organization, and anything else that might give an attacker an advantage in attacking your application or organization. -Basically, assume that everything in your development project will be public knowledge when it's deployed. Avoid including sensitive data of any kind in the project. +Basically, assume that everything in your development project is public knowledge when it's deployed. Avoid including sensitive data of any kind in the project. Earlier, we discussed [Azure Key Vault](../../key-vault/general/overview.md). You can use Key Vault to store secrets like keys and passwords instead of hard-coding them. When you use Key Vault in combination with managed identities for Azure resources, your Azure web app can access secret configuration values easily and securely without storing any secrets in your source control or configuration. To learn more, see [Manage secrets in your server apps with Azure Key Vault](/training/modules/manage-secrets-with-azure-key-vault/). Earlier, we discussed [Azure Key Vault](../../key-vault/general/overview.md). Yo Your application must be able to handle [errors](/dotnet/standard/exceptions/) that occur during execution in a consistent manner. The application should catch all errors and either fail safe or closed. -You should also ensure that errors are logged with sufficient user context to identify suspicious or malicious activity. Logs should be retained for a sufficient time to allow delayed forensic analysis. Logs should be in a format that can be easily consumed by a log management solution. Ensure that alerts for errors that are related to security are triggered. Insufficient logging and monitoring allow attackers to further attack systems and maintain persistence. +You should also ensure that errors are logged with sufficient user context to identify suspicious or malicious activity. 
Logs should be retained for a sufficient time to allow delayed forensic analysis. Logs should be in a format that is easily consumed by a log management solution. Ensure that alerts for errors related to security are triggered. Insufficient logging and monitoring allow attackers to further attack systems and maintain persistence. ### Take advantage of error and exception handling Ensure that: * Exceptions are logged and that they provide enough information for forensics or incident response teams to investigate. -[Azure Logic Apps](../../logic-apps/logic-apps-overview.md) provides a first-class experience for [handling errors and exceptions](../../logic-apps/logic-apps-exception-handling.md) that are caused by dependent systems. You can use Logic Apps to create workflows to automate tasks and processes that integrate apps, data, systems, and services across enterprises and organizations. +[Azure Logic Apps](../../logic-apps/logic-apps-overview.md) provides a first-class experience for [handling errors and exceptions](../../logic-apps/logic-apps-exception-handling.md) caused by dependent systems. You can use Logic Apps to create workflows to automate tasks and processes that integrate apps, data, systems, and services across enterprises and organizations. ### Use logging and alerting |
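The fail-closed and contextual-logging guidance above can be sketched in a few lines. The following Python sketch is illustrative only (the function name, log fields, and `process` callback are ours, not from any Azure SDK): it logs the failing user and file with enough context for later forensics, then denies the operation rather than continuing in an unknown state.

```python
import logging

logger = logging.getLogger("app.security")

def handle_upload(user_id: str, filename: str, process) -> bool:
    """Run an upload-processing step, failing closed on any error."""
    try:
        process(filename)
        return True
    except Exception as exc:
        # Record user context so suspicious or malicious activity
        # can be identified during delayed forensic analysis.
        logger.error("upload failed user=%s file=%s error=%s",
                     user_id, filename, exc)
        return False  # Fail closed: deny the operation on any error.
```

A real application would route these records to a log management solution and wire security-related errors to alerts, as the article recommends.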
security | Secure Dev Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/secure-dev-overview.md | description: Best practices to help you develop more secure code and deploy a mo Previously updated : 02/06/2023 Last updated : 09/26/2023 Security is one of the most important aspects of any application, and it's not a Following best practices for secure software development requires integrating security into each phase of the software development lifecycle, from requirement analysis to maintenance, regardless of the project methodology ([waterfall](https://en.wikipedia.org/wiki/Waterfall_model), [agile](https://en.wikipedia.org/wiki/Agile_software_development), or [DevOps](https://en.wikipedia.org/wiki/DevOps)). In the wake of high-profile data breaches and the exploitation of operational security flaws, more developers are understanding that security needs to be addressed throughout the development process. -The later you fix a problem in your development lifecycle, the more that fix will cost you. Security issues are no exception. If you disregard security issues in the early phases of your software development, each phase that follows might inherit the vulnerabilities of the preceding phase. Your final product will have accumulated multiple security issues and the possibility of a breach. Building security into each phase of the development lifecycle helps you catch issues early, and it helps you reduce your development costs. +The later you fix a problem in your development lifecycle, the more that fix costs you. Security issues are no exception. If you disregard security issues in the early phases of your software development, each phase that follows might inherit the vulnerabilities of the preceding phase. Your final product accumulates multiple security issues and the possibility of a breach. 
Building security into each phase of the development lifecycle helps you catch issues early, and it helps you reduce your development costs. We follow the phases of the [Microsoft Security Development Lifecycle](https://www.microsoft.com/securityengineering/sdl/) (SDL) to introduce activities and Azure services that you can use to fulfill secure software development practices in each phase of the lifecycle. applications and to help secure your applications on Azure: [Microsoft identity platform](../../active-directory/develop/index.yml) - The Microsoft identity platform is an evolution of the Azure AD identity service and developer platform. It's a full-featured platform that consists of an authentication service, open-source libraries, application registration and configuration, full developer documentation, code samples, and other developer content. The Microsoft identity platform supports industry-standard protocols like OAuth 2.0 and OpenID Connect. -[Security best practices for Azure solutions](https://azure.microsoft.com/resources/security-best-practices-for-azure-solutions/) - A collection of security best practices to use when you design, deploy, and manage cloud solutions by using Azure. This paper is intended to be a resource for IT pros. This might include designers, architects, developers, and testers who build and deploy secure Azure solutions. +[Azure security best practices and patterns](../fundamentals/best-practices-and-patterns.md) - A collection of security best practices to use when you design, deploy, and manage cloud solutions by using Azure. Guidance is intended to be a resource for IT pros. This might include designers, architects, developers, and testers who build and deploy secure Azure solutions. 
[Security and Compliance Blueprints on Azure](../../governance/blueprints/samples/azure-security-benchmark-foundation/index.md) - Azure Security and Compliance Blueprints are resources that can help you build and launch cloud-powered applications that comply with stringent regulations and standards. |
spring-apps | Quotas | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quotas.md | The following table defines limits for the pricing plans in Azure Spring Apps. | Azure Spring Apps service instances | per region per subscription | 10 | 10 | 10 | 10 | 10 | | Total app instances | per Azure Spring Apps service instance | 25 | 500 | 1000 | 400 | 1000 | | Custom Domains for app | per Azure Spring Apps service instance | 0 | 500 | 500 | 500 | 500 |+| Custom Domains for app | per app instance | 0 | 5 | 5 | 5 | 5 | | Custom Domains for Tanzu Component | per Tanzu Component | N/A | N/A | 5 | N/A | N/A | | Persistent volumes | per Azure Spring Apps service instance | 1 GB/app x 10 apps | 50 GB/app x 10 apps | 50 GB/app x 10 apps | Not applicable | Not applicable | | Inbound Public Endpoints | per Azure Spring Apps service instance | 10 <sup>1</sup> | 10 <sup>1</sup> | 10 <sup>1</sup> | 10 <sup>1</sup> | 10 <sup>1</sup> | |
storage | Blob Upload Function Trigger Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-upload-function-trigger-javascript.md | Although the Azure Function code runs locally, it connects to the cloud-based se "IsEncrypted": false, "Values": { "FUNCTIONS_WORKER_RUNTIME": "node",- "AzureWebJobsFeatureFlags": "EnableWorkerIndexing", "AzureWebJobsStorage": "", "StorageConnection": "STORAGE-CONNECTION-STRING", "StorageAccountName": "STORAGE-ACCOUNT-NAME", |
storage | Data Lake Storage Migrate Gen1 To Gen2 Azure Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-migrate-gen1-to-gen2-azure-portal.md | The following functionality isn't supported in the compatibility layer. ## Frequently asked questions +#### How long will migration take? ++Data and metadata are migrated in parallel. The total time required to complete a migration is equal to whichever of these two processes complete last. ++The following table shows the approximate speed of each migration processing task. ++> [!NOTE] +> These time estimates are approximate and can vary. For example, copying a large number of small files can slow performance. ++| Processing task | Speed | +|-|| +| Data copy | 9 TB per hour | +| Data validation | 9 million files per hour | +| Metadata copy | 4 million files and folders per hour | +| Metadata processing | 25 million files and folders per hour | +| Additional metadata processing (data copy option)<sup>1</sup> | 50 million files and folders per hour | ++<sup>1</sup> The additional metadata processing time applies only if you choose the **Copy data to a new Gen2 account** option. This processing time does not apply if you choose the **Complete migration to a new gen2 account** option. ++##### Example: Processing a large amount of data and metadata ++This example assumes **300 TB** of data and **200 million** data and metadata items. 
++| Task | Estimated time | +|--|--| +| Copy data | 300 TB / 9 TB = 33.33 hours | +| Validate data | 200 million / 9 million = 22.22 hours | +| **Total data migration time** | **33.33 + 22.22 = 55.55 hours** | +| Copy metadata | 200 million / 4 million = 50 hours | +| Metadata processing | 200 million / 25 million = 8 hours | +| Additional metadata processing - data copy option only | 200 million / 50 million = 4 hours | +| **Total metadata migration time** | **50 + 8 + 4 = 62 hours** | +| **Total time to perform a data-only migration** | **62 hours** | +| **Total time to perform a complete migration** | **62 - 4 = 58 hours** | ++##### Example: Processing a small amount of data and metadata ++This example assumes **2 TB** of data and **56 thousand** data and metadata items. ++| Task | Estimated time | +|--|--| +| Copy data | (2 TB / 9 TB) * 60 minutes = 13.3 minutes | +| Validate data | (56,000 / 9 million) * 3,600 seconds = 22.4 seconds | +| **Total data migration time** | **13.3 minutes + 22.4 seconds = approximately 14 minutes** | +| Copy metadata | (56,000 / 4 million) * 3,600 seconds = approximately 51 seconds | +| Metadata processing | 56,000 / 25 million = 8 seconds | +| Additional metadata processing - data copy option only | (56,000 / 50 million) * 3,600 seconds = 4 seconds | +| **Total metadata migration time** | **51 + 8 + 4 = 63 seconds** | +| **Total time to perform a data-only migration** | **14 minutes** | +| **Total time to perform a complete migration** | **14 minutes - 4 seconds = 13 minutes and 56 seconds (approximately 14 minutes)** | + #### How much does the data migration cost? There's no cost to use the portal-based migration tool; however, you'll be billed for usage of Azure Data Lake Gen1 and Gen2 services. During the data migration, you'll be billed for the data storage and transactions of the Gen1 account. |
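As a rough illustration, the arithmetic behind these worked examples can be captured in a small helper. This Python sketch is ours (the function and parameter names are invented); it uses the approximate rates from the FAQ table, so real migrations can differ. Because data and metadata migrate in parallel, the total is the slower of the two tracks.

```python
def estimate_migration_hours(data_tb: float, items_millions: float,
                             copy_option: bool) -> float:
    """Estimate total Gen1-to-Gen2 migration time in hours using the
    approximate rates from the FAQ table. `copy_option=True` models the
    'Copy data to a new Gen2 account' option, which adds extra metadata
    processing; False models 'Complete migration to a new Gen2 account'."""
    # Data track: copy at 9 TB/hour, then validate at 9 million files/hour.
    data_hours = data_tb / 9 + items_millions / 9
    # Metadata track: copy at 4 million/hour, process at 25 million/hour.
    meta_hours = items_millions / 4 + items_millions / 25
    if copy_option:
        meta_hours += items_millions / 50  # additional metadata processing
    # Tracks run in parallel; the slower one determines total time.
    return max(data_hours, meta_hours)
```

For the large example above (300 TB, 200 million items), this reproduces the 62-hour data-copy estimate and the 58-hour complete-migration estimate.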
storage | Map Rest Apis Transaction Categories | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/map-rest-apis-transaction-categories.md | + + Title: Map each REST operation to a price - Azure Blob Storage +description: Find the operation type of each REST operation so that you can identify the price of an operation. ++++ Last updated : 09/25/2023+++++# Map each REST operation to a price ++This article helps you find the price of each REST operation that clients can execute against the Azure Blob Storage service. ++Each request made by tools such as AzCopy or Azure Storage Explorer arrives at the service in the form of a REST operation. This is also true for a custom application that leverages an Azure Storage client library. ++To determine the price of each operation, you must first determine how that operation is classified in terms of its _type_. That's because the pricing pages list prices only by operation type and not by each individual operation. Use the tables in this article as a guide. ++## Operation type of each Blob Storage REST operation ++The following table maps each Blob Storage REST operation to an operation type. ++The price of each type appears on the [Azure Blob Storage pricing](https://azure.microsoft.com/pricing/details/storage/blobs/) page. 
++| Operation | Premium block blob | Standard general-purpose v2 | Standard general-purpose v1 | +|-||--|--| +| [List Containers](/rest/api/storageservices/list-containers2) | List and create container | List and create container | List and create container | +| [Set Blob Service Properties](/rest/api/storageservices/set-blob-service-properties) | Other | Other | Write | +| [Get Blob Service Properties](/rest/api/storageservices/get-blob-service-properties) | Other | Other | Read | +| [Preflight Blob Request](/rest/api/storageservices/preflight-blob-request) | Other | Other | Read | +| [Get Blob Service Stats](/rest/api/storageservices/get-blob-service-stats) | Other | Other | Read | +| [Get Account Information](/rest/api/storageservices/get-account-information) | Other | Other | Read | +| [Get User Delegation Key](/rest/api/storageservices/get-user-delegation-key) | Other | Other | Read | +| [Create Container](/rest/api/storageservices/create-container) | List and create container | List and create container | List and create container | +| [Get Container Properties](/rest/api/storageservices/get-container-properties) | Other | Other | Read | +| [Get Container Metadata](/rest/api/storageservices/get-container-metadata) | Other | Other | Read | +| [Set Container Metadata](/rest/api/storageservices/set-container-metadata) | Other | Other | Write | +| [Get Container ACL](/rest/api/storageservices/get-container-acl) | Other | Other | Read | +| [Set Container ACL](/rest/api/storageservices/set-container-acl) | Other | Other | Write | +| [Delete Container](/rest/api/storageservices/delete-container) | Free | Free | Free | +| [Lease Container](/rest/api/storageservices/lease-container) (acquire, release, renew) | Other | Other | Read | +| [Lease Container](/rest/api/storageservices/lease-container) (break, change) | Other | Other | Write | +| [Restore Container](/rest/api/storageservices/restore-container) | List and create container | List and create container | List 
and create container | +| [List Blobs](/rest/api/storageservices/list-blobs) | List and create container | List and create container | List and create container | +| [Find Blobs by Tags in Container](/rest/api/storageservices/find-blobs-by-tags-container) | List and create container | List and create container | List and create container | +| [Put Blob](/rest/api/storageservices/put-blob) | Write | Write | Write | +| [Put Blob from URL](/rest/api/storageservices/put-blob-from-url) | Write | Write | Write | +| [Get Blob](/rest/api/storageservices/get-blob) | Read | Read | Read | +| [Get Blob Properties](/rest/api/storageservices/get-blob-properties) | Other | Other | Read | +| [Set Blob Properties](/rest/api/storageservices/set-blob-properties) | Other | Other | Write | +| [Get Blob Metadata](/rest/api/storageservices/get-blob-metadata) | Other | Other | Read | +| [Set Blob Metadata](/rest/api/storageservices/set-blob-metadata) | Other | Other | Write | +| [Get Blob Tags](/rest/api/storageservices/get-blob-tags) | Other | Other | Read | +| [Set Blob Tags](/rest/api/storageservices/set-blob-tags) | Other | Other | Write | +| [Find Blobs by Tags](/rest/api/storageservices/find-blobs-by-tags) | List and create container | List and create container | List and create container | +| [Lease Blob](/rest/api/storageservices/lease-blob) (acquire, release, renew) | Other | Other | Read | +| [Lease Blob](/rest/api/storageservices/lease-blob) (break, change) | Other | Other | Write | +| [Snapshot Blob](/rest/api/storageservices/snapshot-blob) | Other | Other | Read | +| [Copy Blob](/rest/api/storageservices/copy-blob) | Write | Write | Write | +| [Copy Blob from URL](/rest/api/storageservices/copy-blob-from-url) | Write | Write | Write | +| [Abort Copy Blob](/rest/api/storageservices/abort-copy-blob) | Other | Other | Write | +| [Delete Blob](/rest/api/storageservices/delete-blob) | Free | Free | Free | +| [Undelete Blob](/rest/api/storageservices/undelete-blob) 
| Write | Write | Write | +| [Set Blob Tier](/rest/api/storageservices/set-blob-tier) (tier down) | Write | Write | Write | +| [Set Blob Tier](/rest/api/storageservices/set-blob-tier) (tier up) | Read | Read | Read | +| [Blob Batch](/rest/api/storageservices/blob-batch) (Set Blob Tier) | Other | Other | Other | +| [Set Immutability Policy](/rest/api/storageservices/set-blob-immutability-policy) | Other | Other | Other | +| [Delete Immutability Policy](/rest/api/storageservices/delete-blob-immutability-policy) | Other | Other | Other | +| [Set Legal Hold](/rest/api/storageservices/set-blob-legal-hold) | Other | Other | Other | +| [Put Block](/rest/api/storageservices/put-block) | Write | Write | Write | +| [Put Block from URL](/rest/api/storageservices/put-block-from-url) | Write | Write | Write | +| [Put Block List](/rest/api/storageservices/put-block-list) | Write | Write | Write | +| [Get Block List](/rest/api/storageservices/get-block-list) | Other | Other | Read | +| [Query Blob Contents](/rest/api/storageservices/query-blob-contents) | Read<sup>1</sup> | Read<sup>1</sup> | N/A | +| [Incremental Copy Blob](/rest/api/storageservices/incremental-copy-blob) | Other | Other | Write | +| [Append Block](/rest/api/storageservices/append-block) | Write | Write | Write | +| [Append Block from URL](/rest/api/storageservices/append-block-from-url) | Write | Write | Write | +| [Append Blob Seal](/rest/api/storageservices/append-blob-seal) | Write | Write | Write | +| [Set Blob Expiry](/rest/api/storageservices/set-blob-expiry) | Other | Other | Write | ++<sup>1</sup> In addition to a read charge, charges are incurred for the **Query Acceleration - Data Scanned** and **Query Acceleration - Data Returned** transaction categories that appear on the [Azure Data Lake Storage pricing](https://azure.microsoft.com/pricing/details/storage/data-lake/) page. 
++## Operation type of each Data Lake Storage Gen2 REST operation ++The following table maps each Data Lake Storage Gen2 REST operation to an operation type. ++The price of each type appears in the [Azure Data Lake Storage pricing](https://azure.microsoft.com/pricing/details/storage/data-lake/) page. ++| Operation | Premium block blob | Standard general-purpose v2 | +|--|--|--| +| [Filesystem - Create](/rest/api/storageservices/datalakestoragegen2/filesystem/create) | Write | Write | +| [Filesystem - Delete](/rest/api/storageservices/datalakestoragegen2/filesystem/delete) | Free | Free | +| [Filesystem - Get Properties](/rest/api/storageservices/datalakestoragegen2/filesystem/get-properties) | Other | Other | +| [Filesystem - List](/rest/api/storageservices/datalakestoragegen2/filesystem/list) | Iterative Read | Iterative Read | +| [Filesystem - Set Properties](/rest/api/storageservices/datalakestoragegen2/filesystem/set-properties) | Write | Write | +| [Path - Create](/rest/api/storageservices/datalakestoragegen2/path/create) | Write | Write | +| [Path - Delete](/rest/api/storageservices/datalakestoragegen2/path/delete) | Free | Free | +| [Path - Get Properties](/rest/api/storageservices/datalakestoragegen2/path/get-properties) | Read | Read | +| [Path - Lease](/rest/api/storageservices/datalakestoragegen2/path/lease) | Other | Other | +| [Path - List](/rest/api/storageservices/datalakestoragegen2/path/list) | Iterative Read | Iterative Read | +| [Path - Read](/rest/api/storageservices/datalakestoragegen2/path/read) | Read | Read | +| [Path - Update](/rest/api/storageservices/datalakestoragegen2/path/update) | Write | Write | ++## See also ++- [Plan and manage costs for Azure Blob Storage](../common/storage-plan-manage-costs.md) |
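The lookup these tables describe can be modeled as a simple dictionary from operation name to billed operation type per account kind. This Python sketch is illustrative only: it covers a small subset of the Blob Storage rows above, the account-kind keys (`premium`, `gpv2`, `gpv1`) are shorthand we invented, and the fallback to `"Other"` for unmapped operations is an assumption, not documented billing behavior.

```python
# Illustrative subset of the operation-to-type table above.
OPERATION_TYPE = {
    "Put Blob":            {"premium": "Write", "gpv2": "Write", "gpv1": "Write"},
    "Get Blob":            {"premium": "Read",  "gpv2": "Read",  "gpv1": "Read"},
    "Get Blob Properties": {"premium": "Other", "gpv2": "Other", "gpv1": "Read"},
    "Delete Blob":         {"premium": "Free",  "gpv2": "Free",  "gpv1": "Free"},
}

def operation_type(operation: str, account_kind: str) -> str:
    """Return the billed operation type for a REST operation.
    Falls back to 'Other' for unmapped operations (our guess,
    not the service's actual classification)."""
    return OPERATION_TYPE.get(operation, {}).get(account_kind, "Other")
```

The interesting wrinkle the table captures is that the same operation can bill differently by account kind, for example `Get Blob Properties` is a Read on general-purpose v1 but Other elsewhere.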
storage | Storage Retry Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-retry-policy.md | -# Implement a retry policy with the Azure Storage client library for .NET +# Implement a retry policy with .NET Any application that runs in the cloud or communicates with remote services and resources must be able to handle transient faults. It's common for these applications to experience faults due to a momentary loss of network connectivity, a request timeout when a service or resource is busy, or other factors. Developers should build applications to handle transient faults transparently to improve stability and resiliency. The following table lists the properties of the [RetryOptions](/dotnet/api/azure | Property | Type | Description | Default value | | | | | |-| [Delay](/dotnet/api/azure.core.retryoptions.delay) | [TimeSpan](/dotnet/api/system.timespan) | The delay between retry attempts for a fixed approach or the delay on which to base calculations for a backoff-based approach. If the service provides a Retry-After response header, the next retry will be delayed by the duration specified by the header value. | 0.8 second | -| [MaxDelay](/dotnet/api/azure.core.retryoptions.maxdelay) | [TimeSpan](/dotnet/api/system.timespan) | The maximum permissible delay between retry attempts when the service doesn't provide a Retry-After response header. If the service provides a Retry-After response header, the next retry will be delayed by the duration specified by the header value. | 1 minute | +| [Delay](/dotnet/api/azure.core.retryoptions.delay) | [TimeSpan](/dotnet/api/system.timespan) | The delay between retry attempts for a fixed approach or the delay on which to base calculations for a backoff-based approach. If the service provides a Retry-After response header, the next retry is delayed by the duration specified by the header value. 
| 0.8 second | +| [MaxDelay](/dotnet/api/azure.core.retryoptions.maxdelay) | [TimeSpan](/dotnet/api/system.timespan) | The maximum permissible delay between retry attempts when the service doesn't provide a Retry-After response header. If the service provides a Retry-After response header, the next retry is delayed by the duration specified by the header value. | 1 minute | | [MaxRetries](/dotnet/api/azure.core.retryoptions.maxretries) | int | The maximum number of retry attempts before giving up. | 5 | | [Mode](/dotnet/api/azure.core.retryoptions.mode) | [RetryMode](/dotnet/api/azure.core.retrymode) | The approach to use for calculating retry delays. | Exponential | | [NetworkTimeout](/dotnet/api/azure.core.retryoptions.networktimeout) | [TimeSpan](/dotnet/api/system.timespan) | The timeout applied to an individual network operation. | 100 seconds | -In this code example for blob storage, we'll configure the retry options in the `Retry` property of the [BlobClientOptions](/dotnet/api/azure.storage.blobs.blobclientoptions) class. Then, we'll create a client object for the blob service using the retry options. +In this code example for blob storage, we configure the retry options in the `Retry` property of the [BlobClientOptions](/dotnet/api/azure.storage.blobs.blobclientoptions) class. Then, we create a client object for the blob service using the retry options. :::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/Retry.cs" id="Snippet_RetryOptions"::: -In this example, each service request issued from the `BlobServiceClient` object will use the retry options as defined in the `BlobClientOptions` object. You can configure various retry strategies for service clients based on the needs of your app. +In this example, each service request issued from the `BlobServiceClient` object uses the retry options as defined in the `BlobClientOptions` object. 
You can configure various retry strategies for service clients based on the needs of your app. ## Use geo-redundancy to improve app resiliency If your app requires high availability and greater resiliency against failures, you can leverage Azure Storage geo-redundancy options as part of your retry policy. Storage accounts configured for geo-redundant replication are synchronously replicated in the primary region, and asynchronously replicated to a secondary region that is hundreds of miles away. Azure Storage offers two options for geo-redundant replication: [Geo-redundant storage (GRS)](../common/storage-redundancy.md#geo-redundant-storage) and [Geo-zone-redundant storage (GZRS)](../common/storage-redundancy.md#geo-zone-redundant-storage). In addition to enabling geo-redundancy for your storage account, you also need to configure read access to the data in the secondary region. To learn how to change replication options for your storage account, see [Change how a storage account is replicated](../common/redundancy-migration.md). -In this example, we set the `GeoRedundantSecondaryUri` property in `BlobClientOptions`. When this property is set and a read request failure occurs in the primary region, the app will seamlessly switch to perform retries against the secondary region endpoint. +In this example, we set the [GeoRedundantSecondaryUri](/dotnet/api/azure.storage.blobs.blobclientoptions.georedundantsecondaryuri#azure-storage-blobs-blobclientoptions-georedundantsecondaryuri) property in `BlobClientOptions`. If this property is set, the secondary URI is used for `GET` or `HEAD` requests during retries. If the status of the response from the secondary URI is a 404, then subsequent retries for the request don't use the secondary URI again, as this status code indicates that the resource may not have propagated there yet. Otherwise, subsequent retries alternate back and forth between primary and secondary URI. 
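The primary/secondary alternation described above can be modeled as a small simulation. This Python sketch is our reading of the documented behavior, not Azure SDK code: host names, the per-retry status list, and the function name are all illustrative. It shows that only `GET`/`HEAD` retries use the secondary, and that a 404 from the secondary stops further secondary attempts.

```python
def plan_retry_hosts(method: str, statuses: list[int]) -> list[str]:
    """Return the host ('primary'/'secondary') used for each retry attempt,
    given the status code each retry receives. A model of the behavior
    described in the article, not the actual SDK implementation."""
    hosts: list[str] = []
    secondary_allowed = method in ("GET", "HEAD")
    target = "secondary" if secondary_allowed else "primary"
    for status in statuses:
        hosts.append(target)
        if target == "secondary" and status == 404:
            # Resource may not have replicated yet; stop using secondary.
            secondary_allowed = False
        # Alternate hosts while the secondary is still in play.
        if secondary_allowed:
            target = "primary" if target == "secondary" else "secondary"
        else:
            target = "primary"
    return hosts
```

Write requests never touch the secondary here because geo-replicated secondaries are read-only.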
Apps that make use of geo-redundancy need to keep in mind some specific design considerations. To learn more, see [Use geo-redundancy to design highly available applications](../common/geo-redundant-design.md). |
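The backoff behavior these retry settings control can be sketched independently of the SDK. A minimal Python sketch (illustrative only, not the Azure SDK implementation; the real policy also adds random jitter) of `Exponential` mode with the defaults from the table above:

```python
def retry_delays(delay=0.8, max_delay=60.0, max_retries=5):
    """Yield the delay (in seconds) before each retry attempt.

    Mirrors Exponential mode: the base delay doubles with each attempt,
    capped at max_delay; after max_retries attempts, the operation fails.
    """
    for attempt in range(max_retries):
        yield min(delay * (2 ** attempt), max_delay)

print(list(retry_delays()))               # [0.8, 1.6, 3.2, 6.4, 12.8]
print(list(retry_delays(max_retries=8)))  # later attempts are capped at 60.0
```

Raising `MaxRetries` lengthens the total time spent retrying, while `MaxDelay` bounds how long any single wait can grow.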
storage | Storage Plan Manage Costs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-plan-manage-costs.md | The correct pricing page for these requests is the [Azure Data Lake Storage Gen2 If your account does not have the hierarchical namespace feature enabled, but you expect clients, workloads, or applications to make requests over the Data Lake Storage endpoint of your account, then set the **File Structure** drop-down list to **Flat Namespace**. Otherwise, make sure that it is set to **Hierarchical Namespace**. +#### Find the price of each operation ++Each request made by tools such as AzCopy or Azure Storage Explorer arrives at the service in the form of a REST operation. This is also true for a custom application that uses an Azure Storage client library. ++To determine the price of each operation, you must first determine how that operation is classified in terms of its _type_. That's because the pricing pages list prices only by operation type and not by each individual operation. To see how each operation maps to an operation type, see [Map each REST operation to a price](../blobs/map-rest-apis-transaction-categories.md). + ### Using Azure Prepayment with Azure Blob Storage You can pay for Azure Blob Storage charges with your Azure Prepayment (previously called monetary commitment) credit. However, you can't use Azure Prepayment credit to pay for charges for third-party products and services, including those from the Azure Marketplace. |
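To see how that operation-to-type mapping drives cost, here's a small sketch. The mapping and the per-type prices below are made-up placeholders (real values come from the mapping article and the pricing pages, and vary by region, tier, and redundancy):

```python
# Hypothetical data -- substitute real values from the pricing pages.
OPERATION_TYPE = {"PutBlob": "Write", "GetBlob": "Read", "ListBlobs": "List"}
PRICE_PER_10K = {"Write": 0.0715, "Read": 0.0057, "List": 0.0715}

def estimated_cost(op_counts):
    """Estimate charges for a dict of {REST operation name: monthly count}."""
    return sum(
        PRICE_PER_10K[OPERATION_TYPE[op]] * count / 10_000
        for op, count in op_counts.items()
    )

cost = estimated_cost({"PutBlob": 100_000, "GetBlob": 500_000})
print(f"${cost:.2f}")  # $1.00
```

The key point the sketch illustrates: the bill is computed per operation *type*, so two different REST operations in the same type cost the same.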
storage | Install Container Storage Aks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/install-container-storage-aks.md | description: Learn how to install Azure Container Storage Preview for use with A Previously updated : 09/19/2023 Last updated : 09/26/2023 -> If you already have an AKS cluster deployed, you can install Azure Container Storage Preview using an installation script instead of following the manual steps in this article. See [Quickstart: Use Azure Container Storage Preview with Azure Kubernetes Service](container-storage-aks-quickstart.md). +> If you already have an AKS cluster deployed, you can proceed to [Connect to the cluster](#connect-to-the-cluster). Alternatively, you can install Azure Container Storage Preview [using an automated installation script](container-storage-aks-quickstart.md) instead of following the manual steps outlined in this article. ## Getting started To connect to the cluster, use the Kubernetes command-line client, `kubectl`. Next, you must update your node pool label to associate the node pool with the correct IO engine for Azure Container Storage. -Run the following command to update the label. Remember to replace `<resource-group>` and `<cluster-name>` with your own values, and replace `<nodepool-name>` with the name of your node pool from the previous step. +> [!IMPORTANT] +> **If you created your AKS cluster using the Azure portal:** The cluster will likely have a user node pool and a system/agent node pool. Before you can install Azure Container Storage, you must update the user node pool label as described in this section. However, if your cluster consists of only a system node pool, which is the case with test/dev clusters created with the Azure portal, you'll need to first [add a new user node pool](../../aks/create-node-pools.md#add-a-node-pool) and then label it. 
This is because when you create an AKS cluster using the Azure portal, a taint `CriticalAddOnsOnly` is added to the agent/system node pool, which blocks installation of Azure Container Storage on the system node pool. This taint isn't added when an AKS cluster is created using Azure CLI. ++Run the following command to update the node pool label. Remember to replace `<resource-group>` and `<cluster-name>` with your own values, and replace `<nodepool-name>` with the name of your node pool. ```azurecli-interactive az aks nodepool update --resource-group <resource-group> --cluster-name <cluster-name> --name <nodepool-name> --labels acstor.azure.com/io-engine=acstor ``` -> [!TIP] -> You can verify that the node pool is correctly labeled by signing into the [Azure portal](https://portal.azure.com?azure-portal=true) and navigating to your AKS cluster. Go to **Settings > Node pools**, select your node pool, and under **Taints and labels** you should see `Labels: acstor.azure.com/io-engine:acstor`. +You can verify that the node pool is correctly labeled by signing into the [Azure portal](https://portal.azure.com?azure-portal=true) and navigating to your AKS cluster. Go to **Settings > Node pools**, select your node pool, and under **Taints and labels** you should see `Labels: acstor.azure.com/io-engine:acstor`. ## Assign Contributor role to AKS managed identity |
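The selection rule described above can be summarized in a short sketch. This is not an Azure API; the dictionaries below only loosely imitate the fields `az aks nodepool list` returns, and the field names are illustrative:

```python
BLOCKING_TAINT = "CriticalAddOnsOnly=true:NoSchedule"

def pools_to_label(node_pools):
    """Return node pools that Azure Container Storage can target:
    'User' mode and no CriticalAddOnsOnly taint (portal-created system pools carry it)."""
    return [
        p["name"]
        for p in node_pools
        if p["mode"] == "User" and BLOCKING_TAINT not in p.get("taints", [])
    ]

pools = [
    {"name": "agentpool", "mode": "System", "taints": [BLOCKING_TAINT]},
    {"name": "userpool", "mode": "User", "taints": []},
]
print(pools_to_label(pools))  # ['userpool']
```

If the list comes back empty (a system-pool-only cluster), that corresponds to the case where you must first add a user node pool before labeling.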
storage | Nfs Performance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/nfs-performance.md | description: Learn ways to improve the performance of NFS Azure file shares at s Previously updated : 09/25/2023 Last updated : 09/26/2023 To change this value, set the read-ahead size by adding a rule in udev, a Linux ```output SUBSYSTEM=="bdi" \ , ACTION=="add" \- , PROGRAM="<absolute_path>/awk -v bdi=$kernel 'BEGIN{ret=1} {if ($4 == bdi) {ret=0}} END{exit ret}' /proc/fs/nfsfs/volumes" \ + , PROGRAM="/usr/bin/awk -v bdi=$kernel 'BEGIN{ret=1} {if ($4 == bdi) {ret=0}} END{exit ret}' /proc/fs/nfsfs/volumes" \ , ATTR{read_ahead_kb}="15360" ``` -1. In a console, apply the udev rule by running the [udevadm](https://www.man7.org/linux/man-pages/man8/udevadm.8.html) command as a superuser: +1. In a console, apply the udev rule by running the [udevadm](https://www.man7.org/linux/man-pages/man8/udevadm.8.html) command as a superuser and reloading the rules files and other databases. You only need to run this command once, to make udev aware of the new file. ```bash sudo udevadm control --reload |
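To see what the `PROGRAM` filter in the udev rule above is doing, you can run the same awk test against sample input. The volume data below is fabricated for illustration; on a real system the rule reads `/proc/fs/nfsfs/volumes`, where column 4 holds the BDI device ID:

```shell
# Fabricated /proc/fs/nfsfs/volumes content; column 4 (DEV) is the BDI id.
volumes="NV SERVER   PORT DEV  FSID                              FSC
v4 0a123456 0801 0:61 2a3b4c5d6e7f8a9b:0000000000000000 no"

# The PROGRAM exits 0 (match) only when the rule's kernel name is an NFS
# volume's BDI id, so read_ahead_kb is raised to 15360 only for NFS mounts.
if echo "$volumes" | awk -v bdi="0:61" 'BEGIN{ret=1} {if ($4 == bdi) {ret=0}} END{exit ret}'; then
  echo "0:61 is an NFS volume: set read_ahead_kb=15360"
fi
```

Because udev treats a zero exit status from `PROGRAM` as a match, the `ATTR{read_ahead_kb}` assignment applies only to BDI devices that back an NFS mount.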
storage | Redundancy Premium File Shares | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/redundancy-premium-file-shares.md | |
stream-analytics | Kafka Output | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/kafka-output.md | + + Title: Stream data from Azure Stream Analytics into Kafka +description: Learn about setting up Azure Stream Analytics as a producer to Kafka ++++ Last updated : 09/26/2023+++# Kafka output from Azure Stream Analytics (Preview) ++Azure Stream Analytics allows you to connect directly to Kafka clusters as a producer to output data. The solution is low code and entirely managed by the Azure Stream Analytics team at Microsoft, allowing it to meet business compliance standards. The Kafka adapters are backward compatible, supporting all Kafka versions from 0.10 onward with the latest client release. Users can connect to Kafka clusters inside a VNET and Kafka clusters with a public endpoint, depending on the configurations. The configuration relies on existing Kafka configuration conventions. +Supported compression types are None, Gzip, Snappy, LZ4, and Zstd. ++## Authentication and encryption ++You can use four types of security protocols to connect to your Kafka clusters: ++|Security protocol |Description | +|-|--| +|mTLS |Encryption and authentication using mutual TLS certificates. | +|SASL_SSL |Combines SASL (Simple Authentication and Security Layer) and SSL (Secure Sockets Layer) to ensure both authentication and encryption are in place for data transmission. | +|SASL_PLAINTEXT |Standard authentication with username and password, without encryption. | +|None |No authentication and no encryption. | +++> [!IMPORTANT] +> Confluent Cloud supports authenticating using API keys, OAuth, or SAML single sign-on (SSO). Azure Stream Analytics does not currently support these authentication options.
+> +++### Key Vault integration ++> [!NOTE] +> When using trust store certificates with the mTLS or SASL_SSL security protocols, you must have Azure Key Vault and managed identity configured for your Azure Stream Analytics job. +> +Azure Stream Analytics integrates seamlessly with Azure Key Vault to access stored secrets needed for authentication and encryption when using the mTLS or SASL_SSL security protocols. Your Azure Stream Analytics job connects to Azure Key Vault using managed identity to ensure a secure connection and avoid the exfiltration of secrets. ++You can store the certificates as Key Vault certificates or Key Vault secrets. Private keys are in PEM format. ++### VNET integration +When configuring your Azure Stream Analytics job to connect to your Kafka clusters, depending on your configuration, you may have to configure your job to access your Kafka clusters, which are behind a firewall or inside a virtual network. You can visit the Azure Stream Analytics VNET documentation to learn more about configuring private endpoints to access resources inside a virtual network or behind a firewall. +++### Configuration +The following table lists the property names and their descriptions for creating a Kafka output: + | Property name | Description | +||-| +| Input/Output Alias | A friendly name used in queries to reference your input or output | +| Bootstrap server addresses | A list of host/port pairs to establish the connection to the Kafka cluster. | +| Kafka topic | A unit of your Kafka cluster you want to write events to. | +| Security Protocol | How you want to connect to your Kafka cluster. Azure Stream Analytics supports mTLS, SASL_SSL, SASL_PLAINTEXT, or None. | +| Event Serialization format | The serialization format (JSON, CSV, Avro) of the outgoing data stream. | +| Partition key | Azure Stream Analytics assigns partitions using round-robin partitioning. 
| +| Kafka event compression type | The compression type used for outgoing data streams, such as Gzip, Snappy, LZ4, Zstd, or None. | ++### Limitations +* When configuring your Azure Stream Analytics jobs to use VNET/SWIFT, your job must be configured with at least six (6) streaming units. +* When using mTLS or SASL_SSL with Azure Key Vault, you must convert your Java KeyStore to PEM format. +* The minimum version of Kafka you can configure Azure Stream Analytics to connect to is version 0.10. ++> [!NOTE] +> For direct help with using the Azure Stream Analytics Kafka adapter, please reach out to [askasa@microsoft.com](mailto:askasa@microsoft.com). +> ++## Next steps +> [!div class="nextstepaction"] +> [Quickstart: Create a Stream Analytics job by using the Azure portal](stream-analytics-quick-create-portal.md) ++<!--Link references--> +[stream.analytics.developer.guide]: ../stream-analytics-developer-guide.md +[stream.analytics.scale.jobs]: stream-analytics-scale-jobs.md +[stream.analytics.introduction]: stream-analytics-introduction.md +[stream.analytics.get.started]: stream-analytics-real-time-fraud-detection.md +[stream.analytics.query.language.reference]: /stream-analytics-query/stream-analytics-query-language-reference |
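Because the adapter follows standard Kafka configuration conventions, the output settings in the table map closely onto an ordinary Kafka producer configuration. A hedged sketch of that correspondence (property names follow common Kafka client conventions such as librdkafka, not an Azure Stream Analytics API; values and the `serialization.format` key are illustrative):

```python
# Illustrative only: how the table's settings line up with Kafka client conventions.
kafka_output = {
    "bootstrap.servers": "broker1:9092,broker2:9092",  # Bootstrap server addresses
    "topic": "asa-output",                             # Kafka topic to write events to
    "security.protocol": "SASL_SSL",                   # mTLS corresponds to SSL in client terms
    "compression.type": "gzip",                        # none, gzip, snappy, lz4, or zstd
    "serialization.format": "JSON",                    # JSON, CSV, or Avro for output
}

# Basic validation mirroring the supported values listed in this article.
assert kafka_output["compression.type"] in {"none", "gzip", "snappy", "lz4", "zstd"}
print("valid output configuration")
```

When Stream Analytics writes events, the partition key setting then decides how those events spread across the topic's partitions.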
stream-analytics | Stream Analytics Define Kafka Input | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-define-kafka-input.md | + + Title: Stream data from Kafka into Azure Stream Analytics +description: Learn about setting up Azure Stream Analytics as a consumer from Kafka ++++ Last updated : 09/26/2023+++# Stream data from Kafka into Azure Stream Analytics (Preview) ++Kafka is a distributed streaming platform used to publish and subscribe to streams of records. Kafka is designed to allow your apps to process records as they occur. It is an open-source system developed by the Apache Software Foundation and written in Java and Scala. ++The following are the major use cases: +* Messaging +* Website Activity Tracking +* Metrics +* Log Aggregation +* Stream Processing ++Azure Stream Analytics lets you connect directly to Kafka clusters to ingest data. The solution is low code and entirely managed by the Azure Stream Analytics team at Microsoft, allowing it to meet business compliance standards. The Kafka adapters are backward compatible, supporting all Kafka versions from 0.10 onward with the latest client release. Users can connect to Kafka clusters inside a VNET and Kafka clusters with a public endpoint, depending on the configurations. The configuration relies on existing Kafka configuration conventions. Supported compression types are None, Gzip, Snappy, LZ4, and Zstd. ++## Authentication and encryption ++You can use four types of security protocols to connect to your Kafka clusters: ++|Security protocol |Description | +|-|--| +|mTLS |Encryption and authentication using mutual TLS certificates. 
| +|SASL_SSL |Combines SASL (Simple Authentication and Security Layer) and SSL (Secure Sockets Layer) to ensure both authentication and encryption are in place for data transmission. | +|SASL_PLAINTEXT |Standard authentication with username and password, without encryption. | +|None |No authentication and no encryption. | +++> [!IMPORTANT] +> Confluent Cloud supports authenticating using API keys, OAuth, or SAML single sign-on (SSO). Azure Stream Analytics does not currently support these authentication options. +++### Key Vault integration ++> [!NOTE] +> When using trust store certificates with the mTLS or SASL_SSL security protocols, you must have Azure Key Vault and managed identity configured for your Azure Stream Analytics job. +> ++Azure Stream Analytics integrates seamlessly with Azure Key Vault to access stored secrets needed for authentication and encryption when using the mTLS or SASL_SSL security protocols. Your Azure Stream Analytics job connects to Azure Key Vault using managed identity to ensure a secure connection and avoid the exfiltration of secrets. ++You can store the certificates as Key Vault certificates or Key Vault secrets. Private keys are in PEM format. ++### VNET integration +When configuring your Azure Stream Analytics job to connect to your Kafka clusters, depending on your configuration, you may have to configure your job to access your Kafka clusters, which are behind a firewall or inside a virtual network. You can visit the Azure Stream Analytics VNET documentation to learn more about configuring private endpoints to access resources inside a virtual network or behind a firewall. +++### Configuration +The following table lists the property names and their descriptions for creating a Kafka input: ++| Property name | Description | +||-| +| Input/Output Alias | A friendly name used in queries to reference your input or output | +| Bootstrap server addresses | A list of host/port pairs to establish the connection to the Kafka cluster. | +| Kafka topic | A unit of your Kafka cluster you want to read events from. | +| Security Protocol | How you want to connect to your Kafka cluster. 
Azure Stream Analytics supports mTLS, SASL_SSL, SASL_PLAINTEXT, or None. | +| Event Serialization format | The serialization format (JSON, CSV, Avro, Parquet, Protobuf) of the incoming data stream. | ++++### Limitations +* When configuring your Azure Stream Analytics jobs to use VNET/SWIFT, your job must be configured with at least six (6) streaming units. +* When using mTLS or SASL_SSL with Azure Key Vault, you must convert your Java KeyStore to PEM format. +* The minimum version of Kafka you can configure Azure Stream Analytics to connect to is version 0.10. ++> [!NOTE] +> For direct help with using the Azure Stream Analytics Kafka adapter, please reach out to [askasa@microsoft.com](mailto:askasa@microsoft.com). +> +++## Next steps +> [!div class="nextstepaction"] +> [Quickstart: Create a Stream Analytics job by using the Azure portal](stream-analytics-quick-create-portal.md) ++<!--Link references--> +[stream.analytics.developer.guide]: ../stream-analytics-developer-guide.md +[stream.analytics.scale.jobs]: stream-analytics-scale-jobs.md +[stream.analytics.introduction]: stream-analytics-introduction.md +[stream.analytics.get.started]: stream-analytics-real-time-fraud-detection.md +[stream.analytics.query.language.reference]: /stream-analytics-query/stream-analytics-query-language-reference |
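The Event Serialization format setting tells the job how to decode each record's payload before the query runs. A minimal sketch of that decoding step for the two formats Python's standard library covers (Avro, Parquet, and Protobuf would need extra libraries; the function and field names are illustrative, not part of any Azure API):

```python
import csv
import io
import json

def decode_payload(payload: bytes, fmt: str):
    """Decode one Kafka record according to the configured serialization format."""
    if fmt == "JSON":
        return json.loads(payload)
    if fmt == "CSV":
        return next(csv.reader(io.StringIO(payload.decode("utf-8"))))
    raise ValueError(f"format needs an external library or is unsupported here: {fmt}")

print(decode_payload(b'{"deviceId": 7, "temp": 21.5}', "JSON"))  # {'deviceId': 7, 'temp': 21.5}
print(decode_payload(b"7,21.5", "CSV"))                          # ['7', '21.5']
```

A record that doesn't parse under the configured format can't be queried, which is why the input format must match what the producers actually write to the topic.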
synapse-analytics | Get Started Ssms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/get-started-ssms.md | You can use [SQL Server Management Studio (SSMS)](/sql/ssms/download-sql-server- [Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio) is fully supported starting from version 1.18.0. SSMS is partially supported starting from version 18.5; you can use it to connect and query only. -> [!NOTE] -> If AAD login has connection open for more than 1 hour at time of query execution, any query that relies on AAD will fail. This includes querying storage using AAD pass-through and statements that interact with AAD (like CREATE EXTERNAL PROVIDER). This affects every tool that keeps connection open, like in query editor in SSMS and ADS. Tools that open new connection to execute query are not affected, like Synapse Studio. -> You can restart SSMS or connect and disconnect in ADS to mitigate this issue. -. ## Prerequisites Before you begin, make sure you have the following prerequisites: |
synapse-analytics | Resources Self Help Sql On Demand | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/resources-self-help-sql-on-demand.md | This error can occur when the authentication method is user identity, which is a The error message might also resemble: `File {path} cannot be opened because it does not exist or it is used by another process.` -- If an Azure AD user has a connection open for more than one hour during query execution, any query that relies on Azure AD fails. This scenario includes queries that access storage by using Azure AD pass-through authentication and statements that interact with Azure AD like CREATE EXTERNAL PROVIDER. This issue frequently affects tools that keep connections open, like in the query editor in SQL Server Management Studio and Azure Data Studio. Tools that open new connections to execute a query, like Synapse Studio, aren't affected. - The Azure AD authentication token might be cached by the client applications. For example, Power BI caches the Azure AD token and reuses the same token for one hour. The long-running queries might fail if the token expires during execution. 
Consider the following mitigations: - Restart the client application to obtain a new Azure AD token.-- Consider switching to:- - [Service principal](develop-storage-files-storage-access-control.md?tabs=service-principal#supported-storage-authorization-types) - - [Managed identity](develop-storage-files-storage-access-control.md?tabs=managed-identity#supported-storage-authorization-types) - - [Shared access signature](develop-storage-files-storage-access-control.md?tabs=shared-access-signature#supported-storage-authorization-types) #### [0x80070008](#tab/x80070008) This error can occur when the authentication method is user identity, which is a The error message might also resemble the following pattern: `File {path} cannot be opened because it does not exist or it is used by another process.` -- If an Azure AD user has a connection open for more than one hour during query execution, any query that relies on Azure AD fails, including queries that access storage by using Azure AD pass-through authentication and statements that interact with Azure AD like CREATE EXTERNAL PROVIDER. This issue frequently affects tools that keep connections open, like the query editor in SQL Server Management Studio and Azure Data Studio. Client tools that open new connections to execute a query, like Synapse Studio, aren't affected. - The Azure AD authentication token might be cached by the client applications. For example, Power BI caches an Azure AD token and reuses it for one hour. The long-running queries might fail if the token expires in the middle of execution. 
Consider the following mitigations to resolve the issue: - Restart the client application to obtain a new Azure AD token.-- Consider switching to:- - [Service principal](develop-storage-files-storage-access-control.md?tabs=service-principal#supported-storage-authorization-types) - - [Managed identity](develop-storage-files-storage-access-control.md?tabs=managed-identity#supported-storage-authorization-types) - - [Shared access signature](develop-storage-files-storage-access-control.md?tabs=shared-access-signature#supported-storage-authorization-types) #### [0x80072EE7](#tab/x80072EE7) |
update-center | Guidance Migration Automation Update Management Azure Update Manager | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/guidance-migration-automation-update-management-azure-update-manager.md | Guidance to move various capabilities is provided in the table below: | | | | | | 1 | Patch management for Off-Azure machines. | Could run with or without Arc connectivity. | Azure Arc is a prerequisite for non-Azure machines. | 1. [Create service principal](../azure-arc/servers/onboard-service-principal.md) </br> 2. [Generate installation script](../azure-arc/servers/onboard-service-principal.md#generate-the-installation-script-from-the-azure-portal) </br> 3. [Install agent and connect to Azure](../azure-arc/servers/onboard-service-principal.md#install-the-agent-and-connect-to-azure) | 1. [Create service principal](../azure-arc/servers/onboard-service-principal.md#azure-powershell) <br> 2. [Generate installation script](../azure-arc/servers/onboard-service-principal.md#generate-the-installation-script-from-the-azure-portal) </br> 3. [Install agent and connect to Azure](../azure-arc/servers/onboard-service-principal.md#install-the-agent-and-connect-to-azure) | 2 | Enable periodic assessment to check for latest updates automatically every few hours. | Machines automatically receive the latest updates every 12 hours for Windows and every 3 hours for Linux. | Periodic assessment is an update setting on your machine. If it's turned on, the Update Manager fetches updates every 24 hours for the machine and shows the latest update status. | 1. [Single machine](manage-update-settings.md#configure-settings-on-a-single-vm) </br> 2. [At scale](manage-update-settings.md#configure-settings-at-scale) </br> 3. [At scale using policy](periodic-assessment-at-scale.md) | 1. 
[For Azure VM](../virtual-machines/automatic-vm-guest-patching.md#azure-powershell-when-updating-a-windows-vm) </br> 2. [For Arc-enabled VM](/powershell/module/az.connectedmachine/update-azconnectedmachine?view=azps-10.2.0) |-3 | Static Update deployment schedules (Static list of machines for update deployment). | Automation Update management had its own schedules. | Azure Update Manager creates a [maintenance configuration](../virtual-machines/maintenance-configurations.md) object for a schedule. So, you need to create this object, copying all schedule settings from Automation Update Management to Azure Update Manager schedule. | 1. [Single VM](scheduled-patching.md#schedule-recurring-updates-on-single-vm) </br> 2. [At scale](scheduled-patching.md#schedule-recurring-updates-at-scale) </br> 3. [At scale using policy](scheduled-patching.md#onboarding-to-schedule-using-policy) | [Create a static scope](manage-vms-programmatically.md) | +3 | Static Update deployment schedules (Static list of machines for update deployment). | Automation Update Management had its own schedules. | Azure Update Manager creates a [maintenance configuration](../virtual-machines/maintenance-configurations.md) object for a schedule. So, you need to create this object, copying all schedule settings from Automation Update Management to Azure Update Manager schedule. | 1. [Single VM](scheduled-patching.md#schedule-recurring-updates-on-a-single-vm) </br> 2. [At scale](scheduled-patching.md#schedule-recurring-updates-at-scale) </br> 3. [At scale using policy](scheduled-patching.md#onboard-to-schedule-by-using-azure-policy) | [Create a static scope](manage-vms-programmatically.md) | 4 | Dynamic Update deployment schedules (Defining scope of machines using resource group, tags, etc., which is evaluated dynamically at runtime). | Same as static update schedules. | Same as static update schedules. 
| [Add a dynamic scope](manage-dynamic-scoping.md#add-a-dynamic-scope) | [Create a dynamic scope](tutorial-dynamic-grouping-for-scheduled-patching.md#create-a-dynamic-scope) | 5 | Deboard from Azure Automation Update Management. | After you complete steps 1, 2, and 3, you need to clean up Azure Update Management objects. | | 1. [Remove machines from solution](../automation/update-management/remove-feature.md#remove-management-of-vms) </br> 2. [Remove Update Management solution](../automation/update-management/remove-feature.md#remove-updatemanagement-solution) </br> 3. [Unlink workspace from Automation account](../automation/update-management/remove-feature.md#unlink-workspace-from-automation-account) </br> 4. [Cleanup Automation account](../automation/update-management/remove-feature.md#cleanup-automation-account) | NA | 6 | Reporting | Custom update reports using Log Analytics queries. | Update data is stored in Azure Resource Graph (ARG). Customers can query ARG data to build custom dashboards, workbooks, etc. | The old Automation Update Management data stored in Log Analytics can be accessed, but there's no provision to move data to ARG. You can write ARG queries to access data that will be stored to ARG after virtual machines are patched via Azure Update Manager. With ARG queries, you can build dashboards and workbooks by using the following instructions: </br> 1. [Log structure of Azure Resource Graph updates data](query-logs.md) </br> 2. [Sample ARG queries](sample-query-logs.md) </br> 3. [Create workbooks](manage-workbooks.md) | NA | |
update-center | Prerequsite For Schedule Patching | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/prerequsite-for-schedule-patching.md | This article is an overview on how to configure schedule patching and automatic Currently, you can enable [automatic guest VM patching](../virtual-machines/automatic-vm-guest-patching.md) (autopatch) by setting the patch mode to **Azure-orchestrated** in the Azure portal or **AutomaticByPlatform** in the REST API, where patches are automatically applied during off-peak hours. -For customizing control over your patch installation, you can use [schedule patching](updates-maintenance-schedules.md#scheduled-patching) to define your maintenance window. You can [enable schedule patching](scheduled-patching.md#schedule-recurring-updates-on-single-vm) by setting the patch mode to **Azure orchestrated** in the Azure portal or **AutomaticByPlatform** in the REST API and attaching a schedule to the Azure VM. So, the VM properties couldn't be differentiated between **schedule patching** or **Automatic guest VM patching** because both had the patch mode set to **Azure-Orchestrated**. +For customizing control over your patch installation, you can use [schedule patching](updates-maintenance-schedules.md#scheduled-patching) to define your maintenance window. You can [enable schedule patching](scheduled-patching.md#schedule-recurring-updates-on-a-single-vm) by setting the patch mode to **Azure-orchestrated** in the Azure portal or **AutomaticByPlatform** in the REST API and attaching a schedule to the Azure VM. So, the VM properties couldn't be differentiated between **schedule patching** or **Automatic guest VM patching** because both had the patch mode set to **Azure-orchestrated**. In some instances, when you remove the schedule from a VM, there's a possibility that the VM might be autopatched and rebooted. 
To overcome the limitations, we've introduced a new prerequisite, `ByPassPlatformSafetyChecksOnUserSchedule`, which can now be set to `true` to identify a VM by using schedule patching. It means that VMs with this property set to `true` are no longer autopatched when the VMs don't have an associated maintenance configuration. |
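As a sketch, the new property sits under the VM's patch settings in the ARM model. The fragment below shows the shape for a Linux VM (field nesting follows the Compute patch-settings schema; surrounding VM properties are omitted, and the exact API version may vary):

```json
{
  "properties": {
    "osProfile": {
      "linuxConfiguration": {
        "patchSettings": {
          "patchMode": "AutomaticByPlatform",
          "automaticByPlatformSettings": {
            "bypassPlatformSafetyChecksOnUserSchedule": true
          }
        }
      }
    }
  }
}
```

With this property set to `true` and no associated maintenance configuration, the platform no longer autopatches the VM, which is the behavior the prerequisite is meant to guarantee.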
update-center | Quickstart On Demand | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/quickstart-on-demand.md | Title: Quickstart - deploy updates in using update manager in the Azure portal -description: This quickstart helps you to deploy updates immediately and view results for supported machines in Azure Update Manager using the Azure portal. + Title: 'Quickstart: Deploy updates by using Update Manager in the Azure portal' +description: This quickstart helps you to deploy updates immediately and view results for supported machines in Azure Update Manager by using the Azure portal. Last updated 09/18/2023 -Using the Update Manager you can update automatically at scale with the help of built-in policies and schedule updates on a recurring basis or you can also take control by checking and installing updates manually. +By using Azure Update Manager, you can update automatically at scale with the help of built-in policies and schedule updates on a recurring basis. You can also take control by checking and installing updates manually. -This quickstart details you how to perform manual assessment and apply updates on a selected Azure virtual machine(s) or Arc-enabled server on-premises or in cloud environments. +This quickstart explains how to perform manual assessment and apply updates on selected Azure virtual machines (VMs) or an Azure Arc-enabled server on-premises or in cloud environments. ## Prerequisites - An Azure account with an active subscription. 
If you don't have one yet, sign up for a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- Your role must be either an [Owner](../role-based-access-control/built-in-roles.md#owner) or [Contributor](../role-based-access-control/built-in-roles.md#contributor) for Azure VM and resource administrator for Arc enabled servers.+- Your role must be either an [Owner](../role-based-access-control/built-in-roles.md#owner) or [Contributor](../role-based-access-control/built-in-roles.md#contributor) for an Azure VM and resource administrator for Azure Arc-enabled servers. - Ensure that the target machines meet the specific operating system requirements of Windows Server and Linux. For more information, see [Overview](overview.md). - ## Check updates -1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to **Azure Update Manager**. +1. Sign in to the [Azure portal](https://portal.azure.com) and go to **Azure Update Manager**. -1. Select **Get started** > **On-demand assessment and updates**, select **Check for updates**. +1. Select **Get started** > **On-demand assessment and updates** > **Check for updates**. - :::image type="content" source="./media/quickstart-on-demand/quickstart-check-updates.png" alt-text="Screenshot of accessing check for updates."::: + :::image type="content" source="./media/quickstart-on-demand/quickstart-check-updates.png" alt-text="Screenshot that shows accessing check for updates."::: - In the **Select resources and check for updates**, a table lists all the machines in the specific Azure subscription. + On the **Select resources and check for updates** pane, a table lists all the machines in the specific Azure subscription. 1. Select one or more machines from the list and select **Check for updates** to initiate a compliance scan. - When the assessment is complete, a confirmation message appears on the top right corner of the page. 
- - +After the assessment is finished, a confirmation message appears in the upper-right corner of the page. + ## Configure settings -For the assessed machines that are reporting updates, you can configure [periodic assessment](assessment-options.md#periodic-assessment) [hot patching](updates-maintenance-schedules.md#hot-patching),and [patch orchestration](manage-multiple-machines.md#summary-of-machine-status) either immediately or schedule the updates by defining the maintenance window. +For the assessed machines that are reporting updates, you can configure [periodic assessment](assessment-options.md#periodic-assessment), [hot patching](updates-maintenance-schedules.md#hot-patching), and [patch orchestration](manage-multiple-machines.md#summary-of-machine-status) either immediately or schedule the updates by defining the maintenance window. -To configure the settings on your machines, follow these steps: +To configure the settings on your machines: -1. In **Azure Update Manager | Getting started**, in **On-demand assessment and updates**, select **Update settings**. +1. On the **Azure Update Manager | Get started** page, in **On-demand assessment and updates**, select **Update settings**. - :::image type="content" source="./media/quickstart-on-demand/quickstart-update-settings.png" alt-text="Screenshot showing how to access update settings option to configure updates for virtual machines."::: + :::image type="content" source="./media/quickstart-on-demand/quickstart-update-settings.png" alt-text="Screenshot that shows how to access the Update settings option to configure updates for virtual machines."::: -1. In **Update setting(s) to change**, select any option — *Periodic assessment*, *Hotpatch* and *Patch orchestration* to configure and select **Next**. For more information, see [Configure settings on virtual machines](manage-update-settings.md#configure-settings-on-a-single-vm). +1. 
On the **Update settings to change** page, select **Periodic assessment**, **Hotpatch**, or **Patch orchestration** to configure. Select **Next**. For more information, see [Configure settings on virtual machines](manage-update-settings.md#configure-settings-on-a-single-vm). - A notification appears to confirm that the update settings have been successfully applied. +1. On the **Review and change** tab, verify the resource selection and update settings and select **Review and change**. +A notification confirms that the update settings were successfully applied. ## Install updates -As per the last assessment performed on the selected machines, you can now select resources and machines to install the updates +Based on the last assessment performed on the selected machines, you can now select resources and machines to install the updates. -1. In the **Azure Update Manager | Getting started** page, in **On-demand assessment and updates**, select **Install updates by machines**. +1. On the **Azure Update Manager | Get started** page, in **On-demand assessment and updates**, select **Install updates by machines**. - :::image type="content" source="./media/quickstart-on-demand/quickstart-install-updates.png" alt-text="Screenshot showing how to access install update settings option to install the updates for virtual machines."::: + :::image type="content" source="./media/quickstart-on-demand/quickstart-install-updates.png" alt-text="Screenshot that shows how to access the Install update settings option to install the updates for virtual machines."::: -1. In the **Install one-time updates** page, select one or more machines from the list in the **Machines** tab and click **Next**. +1. On the **Install one-time updates** pane, select one or more machines from the list on the **Machines** tab. Select **Next**. -1. In **Updates**, specify the updates to include in the deployment and click **Next**: +1.
On the **Updates** tab, specify the updates to include in the deployment and select **Next**: - - Include update classification - - Include KB ID/package - by specific KB IDs or package. For Windows, see [MSRC](https://msrc.microsoft.com/update-guide/deployments) for the latest KBs. - - Exclude KB ID/package that you don't want to install as part of the process. Updates not shown in the list can be installed based on the time between last assessment and release of new updates. + - Include update classification. + - Include the Knowledge Base (KB) ID/package, by specific KB IDs or package. For Windows, see the [Microsoft Security Response Center (MSRC)](https://msrc.microsoft.com/update-guide/deployments) for the latest information. + - Exclude the KB ID/package that you don't want to install as part of the process. Updates not shown in the list can be installed based on the time between last assessment and release of new updates. - Include by maximum patch publish date includes the updates published on or before a specific date. -1. In **Properties**, select the **Reboot option** and **Maintenance window** (in minutes) and click **Next**. +1. On the **Properties** tab, select **Reboot** and **Maintenance window** (in minutes). Select **Next**. -1. In **Review + install**, verify the update deployment options and select **Install**. +1. On the **Review + install** tab, verify the update deployment options and select **Install**. -A notification confirms that the installation of updates is in progress and after completion, you can view the results in the **Update Manager**, **History** page. +A notification confirms that the installation of updates is in progress. After the update is finished, you can view the results on the **Update Manager | History** page. ## Next steps - Learn about [managing multiple machines](manage-multiple-machines.md). +Learn about [managing multiple machines](manage-multiple-machines.md). |
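The on-demand check and install flow described in this article can also be scripted for Azure VMs with the Azure CLI. This is a minimal sketch under stated assumptions: `myResourceGroup` and `myVM` are example names, and exact flag spellings can vary across CLI versions.

```shell
# Trigger an on-demand patch assessment (compliance scan) for one VM.
az vm assess-patches \
  --resource-group myResourceGroup \
  --name myVM

# Install updates one time, mirroring the portal choices for update
# classification, reboot behavior, and maintenance window duration.
az vm install-patches \
  --resource-group myResourceGroup \
  --name myVM \
  --classifications-to-include-win Critical Security \
  --reboot-setting IfRequired \
  --maximum-duration PT2H
```

Both commands operate on a single VM; for Azure Arc-enabled servers, the equivalent operations are available through the Update Manager portal experience and its REST API.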
update-center | Scheduled Patching | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/scheduled-patching.md | Title: Scheduling recurring updates in Azure Update Manager -description: The article details how to use Azure Update Manager in Azure to set update schedules that install recurring updates on your machines. +description: This article details how to use Azure Update Manager to set update schedules that install recurring updates on your machines. Last updated 09/18/2023 -# Schedule recurring updates for machines using Azure portal and Azure Policy +# Schedule recurring updates for machines by using the Azure portal and Azure Policy **Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers. > [!IMPORTANT]-> - For a seamless scheduled patching experience, we recommend that for all Azure VMs, you update the patch orchestration to **Customer Managed Schedules** by **30th June 2023**. If you fail to update the patch orchestration by **30th June 2023**, you can experience a disruption in business continuity because the schedules will fail to patch the VMs.[Learn more](prerequsite-for-schedule-patching.md). +> For a seamless scheduled patching experience, we recommend that for all Azure virtual machines (VMs), you update the patch orchestration to **Customer Managed Schedules** by **June 30, 2023**. If you fail to update the patch orchestration by June 30, 2023, you can experience a disruption in business continuity because the schedules will fail to patch the VMs. [Learn more](prerequsite-for-schedule-patching.md). -You can use Update Manager in Azure to create and save recurring deployment schedules. You can create a schedule on a daily, weekly or hourly cadence, specify the machines that must be updated as part of the schedule, and the updates to be installed. 
This schedule will then automatically install the updates as per the created schedule for single VM and at scale. +You can use Azure Update Manager to create and save recurring deployment schedules. You can create a schedule on a daily, weekly, or hourly cadence. You can specify the machines that must be updated as part of the schedule and the updates to be installed. -Update Manager uses maintenance control schedule instead of creating its own schedules. Maintenance control enables customers to manage platform updates. For more information, see [Maintenance control documentation](/azure/virtual-machines/maintenance-control). +This schedule then automatically installs the updates according to the created schedule for a single VM and at scale. ++Update Manager uses a maintenance control schedule instead of creating its own schedules. Maintenance control enables customers to manage platform updates. For more information, see [Maintenance control documentation](/azure/virtual-machines/maintenance-control). ## Prerequisites for scheduled patching +1. See [Prerequisites for Update Manager](./overview.md#prerequisites). +1. Patch orchestration of the Azure machines should be set to **Customer Managed Schedules**. For more information, see [Enable schedule patching on existing VMs](prerequsite-for-schedule-patching.md#enable-schedule-patching-on-azure-vms). For Azure Arc-enabled machines, it isn't a requirement. -1. See [Prerequisites for Update Manager](./overview.md#prerequisites) -1. Patch orchestration of the Azure machines should be set to **Customer Managed Schedules**. For more information, see [how to enable schedule patching on existing VMs](prerequsite-for-schedule-patching.md#enable-schedule-patching-on-azure-vms). For Azure Arc-enabled machines, it isn't a requirement. 
- > [!Note] - > If you set the patch mode to Azure orchestrated (AutomaticByPlatform) but do not enable the **BypassPlatformSafetyChecksOnUserSchedule** flag and do not attach a maintenance configuration to an Azure machine, it is treated as [Automatic Guest patching](../virtual-machines/automatic-vm-guest-patching.md) enabled machine and Azure platform will automatically install updates as per its own schedule. [Learn more](./overview.md#prerequisites). + > [!NOTE] + > If you set the patch mode to **Azure orchestrated** (`AutomaticByPlatform`) but do not enable the **BypassPlatformSafetyChecksOnUserSchedule** flag and do not attach a maintenance configuration to an Azure machine, it's treated as an [automatic guest patching](../virtual-machines/automatic-vm-guest-patching.md)-enabled machine. The Azure platform automatically installs updates according to its own schedule. [Learn more](./overview.md#prerequisites). ## Schedule patching in an availability set -1. All VMs in a common [availability set](../virtual-machines/availability-set-overview.md) aren't updated concurrently. -1. VMs in a common availability set are updated within Update Domain boundaries and, VMs across multiple Update Domains aren't updated concurrently. +All VMs in a common [availability set](../virtual-machines/availability-set-overview.md) aren't updated concurrently. ++VMs in a common availability set are updated within Update Domain boundaries. VMs across multiple Update Domains aren't updated concurrently. ## Configure reboot settings -The registry keys listed in [Configuring Automatic Updates by editing the registry](/windows/deployment/update/waas-wu-settings#configuring-automatic-updates-by-editing-the-registry) and [Registry keys used to manage restart](/windows/deployment/update/waas-restart#registry-keys-used-to-manage-restart) can cause your machines to reboot, even if you specify **Never Reboot** in the **Schedule** settings. 
Configure these registry keys to best suit your environment. +The registry keys listed in [Configure Automatic Updates by editing the registry](/windows/deployment/update/waas-wu-settings#configuring-automatic-updates-by-editing-the-registry) and [Registry keys used to manage restart](/windows/deployment/update/waas-restart#registry-keys-used-to-manage-restart) can cause your machines to reboot. A reboot can occur even if you specify **Never Reboot** in the **Schedule** settings. Configure these registry keys to best suit your environment. ## Service limits -The following are the recommended limits for the mentioned indicators: +We recommend the following limits for the indicators. | Indicator | Limit | |-|-|-| Number of schedules per Subscription per Region | 250 | -| Total number of Resource associations to a schedule | 3000 | -| Resource associations on each dynamic scope | 1000 | -| Number of dynamic scopes per Resource Group or Subscription per Region | 250 | ---## Schedule recurring updates on single VM +| Number of schedules per subscription per region | 250 | +| Total number of resource associations to a schedule | 3,000 | +| Resource associations on each dynamic scope | 1,000 | +| Number of dynamic scopes per resource group or subscription per region | 250 | ->[!NOTE] -> You can schedule updates from the Overview or Machines blade in Update Manager page or from the selected VM. +## Schedule recurring updates on a single VM -# [From Overview blade](#tab/schedule-updates-single-overview) +You can schedule updates from the **Overview** or **Machines** pane on the **Update Manager** page or from the selected VM. -To schedule recurring updates on a single VM, follow these steps: +To schedule recurring updates on a single VM: 1. Sign in to the [Azure portal](https://portal.azure.com). -1. In **Azure Update Manager**, **Overview**, select your **Subscription**, and select **Schedule updates**. +1. 
On the **Azure Update Manager** | **Overview** page, select your subscription, and then select **Schedule updates**. -1. In **Create new maintenance configuration**, you can create a schedule for a single VM. +1. On the **Create new maintenance configuration** page, you can create a schedule for a single VM. - > [!Note] - > Currently, VMs and maintenance configuration in the same subscription are supported. + Currently, VMs and maintenance configuration in the same subscription are supported. -1. In the **Basics** page, select **Subscription**, **Resource Group** and all options in **Instance details**. - - Select the **Maintenance scope** as *Guest (Azure VM, Arc-enabled VMs/servers)*. - - Select **Add a schedule** and in **Add/Modify schedule**, specify the schedule details such as: +1. On the **Basics** page, select **Subscription**, **Resource Group**, and all options in **Instance details**. + - Select **Maintenance scope** as **Guest (Azure VM, Azure Arc-enabled VMs/servers)**. + - Select **Add a schedule**. In **Add/Modify schedule**, specify the schedule details, such as: - - Start on - - Maintenance window (in hours) - > [!NOTE] - > The upper maintenance window is 3 hours 55 mins. - - Repeats (monthly, daily or weekly) - - Add end date - - Schedule summary + - **Start on** + - **Maintenance window** (in hours). The upper maintenance window is 3 hours 55 minutes. + - **Repeats** (monthly, daily, or weekly) + - **Add end date** + - **Schedule summary** - > [!NOTE] - > The hourly option is currently not supported in the portal, but can be used through the [API](./manage-vms-programmatically.md#create-a-maintenance-configuration-schedule). 
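The schedule fields described above (start date, maintenance window, recurrence) correspond to a maintenance configuration resource, which can also be created with the Azure CLI `maintenance` extension. This is a hedged sketch with example names; flag names may differ by extension version.

```shell
# Create an in-guest patching maintenance configuration with a weekly
# window. All resource names and values below are examples.
az maintenance configuration create \
  --resource-group myResourceGroup \
  --resource-name myWeeklyPatchSchedule \
  --location eastus \
  --maintenance-scope InGuestPatch \
  --maintenance-window-start-date-time "2023-10-01 00:00" \
  --maintenance-window-duration "03:55" \
  --maintenance-window-recur-every "Week Saturday" \
  --maintenance-window-time-zone "UTC" \
  --install-patches-reboot-setting IfRequired \
  --extension-properties InGuestPatchMode="User"
```

The resulting configuration can then be attached to machines, for example with `az maintenance assignment create`.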
-- :::image type="content" source="./media/scheduled-updates/scheduled-patching-basics-page.png" alt-text="Scheduled patching basics page."::: - - - For the Repeats-monthly, there are two options: + The hourly option isn't supported in the portal but can be used through the [API](./manage-vms-programmatically.md#create-a-maintenance-configuration-schedule). - - Repeat on a calendar date (optionally run on last date of the month) - - Repeat on nth (first, second, etc.) x day (for example, Monday, Tuesday) of the month. You can also specify an offset from the day set. It could be +6/-6. For example, for customers who want to patch on the first Saturday after a patch on Tuesday, they would set the recurrence as the second Tuesday of the month with a +4 day offset. Optionally you can also specify an end date when you want the schedule to expire. + :::image type="content" source="./media/scheduled-updates/scheduled-patching-basics-page.png" alt-text="Screenshot that shows the Scheduled patching basics page."::: -1. In the **Machines** page, select your machine and select **Next** to continue. + For **Repeats monthly**, there are two options: -1. In the **Updates** page, specify the updates to include in the deployment such as update classification(s) or KB ID/ packages that must be installed when you trigger your schedule. + - Repeat on a calendar date (optionally run on the last date of the month). + - Repeat on nth (first, second, etc.) x day (for example, Monday, Tuesday) of the month. You can also specify an offset from the day set. It could be +6/-6. For example, if you want to patch on the first Saturday after a patch on Tuesday, set the recurrence as the second Tuesday of the month with a +4 day offset. Optionally, you can also specify an end date when you want the schedule to expire. - > [!Note] - > Update Manager doesn't support driver updates. +1. On the **Machines** tab, select your machine, and then select **Next**. -1. 
In the **Tags** page, assign tags to maintenance configurations. + Update Manager doesn't support driver updates. -1. In the **Review + Create** page, verify your update deployment options and select **Create**. +1. On the **Tags** tab, assign tags to maintenance configurations. +1. On the **Review + create** tab, verify your update deployment options, and then select **Create**. -# [From Machines blade](#tab/schedule-updates-single-machine) +# [From the Machines pane](#tab/schedule-updates-single-machine) 1. Sign in to the [Azure portal](https://portal.azure.com). -1. In **Azure Update Manager**, **Machines**, select your **Subscription**, select your machine and select **Schedule updates**. --1. In **Create new maintenance configuration**, you can create a schedule for a single VM, assign machine and tags. Follow the procedure from step 3 listed in **From Overview blade** of [Schedule recurring updates on single VM](#schedule-recurring-updates-on-single-vm) to create a maintenance configuration and assign a schedule. +1. On the **Azure Update Manager** | **Machines** page, select your subscription, select your machine, and then select **Schedule updates**. +1. In **Create new maintenance configuration**, you can create a schedule for a single VM and assign a machine and tags. Follow the procedure from step 3 listed in **From the Overview pane** of [Schedule recurring updates on a single VM](#schedule-recurring-updates-on-a-single-vm) to create a maintenance configuration and assign a schedule. # [From a selected VM](#tab/singlevm-schedule-home) -1. Select your virtual machine and the **virtual machines | Updates** page opens. +1. Select your virtual machine to open the **Virtual machines | Updates** page. 1. Under **Operations**, select **Updates**.-1. In **Updates**, select **Go to Updates using Update Center**. -1. In **Updates preview**, select **Schedule updates** and in **Create new maintenance configuration**, you can create a schedule for a single VM. 
Follow the procedure from step 3 listed in **From Overview blade** of [Schedule recurring updates on single VM](#schedule-recurring-updates-on-single-vm) to create a maintenance configuration and assign a schedule. +1. On the **Updates** tab, select **Go to Updates using Update Center**. +1. In **Updates preview**, select **Schedule updates**. In **Create new maintenance configuration**, you can create a schedule for a single VM. Follow the procedure from step 3 listed in **From the Overview pane** of [Schedule recurring updates on a single VM](#schedule-recurring-updates-on-a-single-vm) to create a maintenance configuration and assign a schedule. --A notification appears that the deployment has been created. -+A notification confirms that the deployment was created. ## Schedule recurring updates at scale -To schedule recurring updates at scale, follow these steps: +To schedule recurring updates at scale, follow these steps. ++You can schedule updates from the **Overview** or **Machines** pane. ->[!NOTE] -> You can schedule updates from the Overview or Machines blade. +# [From the Overview pane](#tab/schedule-updates-scale-overview) -# [From Overview blade](#tab/schedule-updates-scale-overview) - 1. Sign in to the [Azure portal](https://portal.azure.com). -1. In **Azure Update Manager**, **Overview**, select your **Subscription** and select **Schedule updates**. +1. On the **Azure Update Manager** | **Overview** page, select your subscription, and then select **Schedule updates**. -1. In the **Create new maintenance configuration** page, you can create a schedule for multiple machines. +1. On the **Create new maintenance configuration** page, you can create a schedule for multiple machines. - > [!Note] - > Currently, VMs and maintenance configuration in the same subscription are supported. + Currently, VMs and maintenance configuration in the same subscription are supported. -1. 
In the **Basics** page, select **Subscription**, **Resource Group** and all options in **Instance details**. - - Select **Add a schedule** and in **Add/Modify schedule**, specify the schedule details such as: +1. On the **Basics** tab, select **Subscription**, **Resource Group**, and all options in **Instance details**. + - Select **Add a schedule**. In **Add/Modify schedule**, specify the schedule details, such as: - - Start on - - Maintenance window (in hours) - - Repeats(monthly, daily or weekly) - - Add end date - - Schedule summary -- > [!NOTE] - > The hourly option is currently not supported in the portal, but can be used through the [API](./manage-vms-programmatically.md#create-a-maintenance-configuration-schedule). --1. In the **Machines** page, verify if the selected machines are listed. You can add or remove machines from the list. Select **Next** to continue. + - **Start on** + - **Maintenance window** (in hours) + - **Repeats** (monthly, daily, or weekly) + - **Add end date** + - **Schedule summary** -1. In the **Updates** page, specify the updates to include in the deployment such as update classification(s) or KB ID/ packages that must be installed when you trigger your schedule. + The hourly option isn't supported in the portal but can be used through the [API](./manage-vms-programmatically.md#create-a-maintenance-configuration-schedule). - > [!Note] - > Update Manager doesn't support driver updates. +1. On the **Machines** tab, verify if the selected machines are listed. You can add or remove machines from the list. Select **Next**. +1. On the **Updates** tab, specify the updates to include in the deployment, such as update classifications or KB ID/packages that must be installed when you trigger your schedule. -1. In the **Tags** page, assign tags to maintenance configurations. + Update Manager doesn't support driver updates. -1. In the **Review + Create** page, verify your update deployment options and select **Create**. +1. 
On the **Tags** tab, assign tags to maintenance configurations. +1. On the **Review + create** tab, verify your update deployment options, and then select **Create**. -# [From Machines blade](#tab/schedule-updates-scale-machine) +# [From the Machines pane](#tab/schedule-updates-scale-machine) 1. Sign in to the [Azure portal](https://portal.azure.com). -1. In **Azure Update Manager**, **Machines**, select your **Subscription**, select your machines and select **Schedule updates**. +1. On the **Azure Update Manager** | **Machines** page, select your subscription, select your machines, and then select **Schedule updates**. -In **Create new maintenance configuration**, you can create a schedule for a single VM. Follow the procedure from step 3 listed in **From Overview blade** of [Schedule recurring updates on single VM](#schedule-recurring-updates-on-single-vm) to create a maintenance configuration and assign a schedule. +On the **Create new maintenance configuration** page, you can create a schedule for a single VM. Follow the procedure from step 3 listed in **From the Overview pane** of [Schedule recurring updates on a single VM](#schedule-recurring-updates-on-a-single-vm) to create a maintenance configuration and assign a schedule. +A notification confirms that the deployment was created. -A notification appears that the deployment is created. +## Attach a maintenance configuration + A maintenance configuration can be attached to multiple machines. It can be attached to machines at the time of creating a new maintenance configuration or even after you create one. - ## Attach a maintenance configuration - A maintenance configuration can be attached to multiple machines. It can be attached to machines at the time of creating a new maintenance configuration or even after you've created one. + 1. On the **Azure Update Manager** page, select **Machines**, and then select your subscription. + 1. 
Select your machine, and on the **Updates** pane, select **Scheduled updates** to create a maintenance configuration or attach an existing maintenance configuration to the scheduled recurring updates. +1. On the **Scheduling** tab, select **Attach maintenance configuration**. +1. Select the maintenance configuration that you want to attach, and then select **Attach**. +1. On the **Updates** pane, select **Scheduling** > **Attach maintenance configuration**. +1. On the **Attach existing maintenance configuration** page, select the maintenance configuration that you want to attach, and then select **Attach**. - 1. In **Azure Update Manager**, select **Machines** and select your **Subscription**. - 1. Select your machine and in **Updates**, select **Scheduled updates** to create a maintenance configuration or attach existing maintenance configuration to the scheduled recurring updates. -1. In **Scheduling**, select **Attach maintenance configuration**. -1. Select the maintenance configuration that you would want to attach and select **Attach**. -1. In **Updates**, select **Scheduling** and **+Attach maintenance configuration**. -1. In the **Attach existing maintenance configuration** page, select the maintenance configuration that you want to attach and select **Attach**. + :::image type="content" source="./media/scheduled-updates/scheduled-patching-attach-maintenance-inline.png" alt-text="Screenshot that shows Scheduled patching attach maintenance configuration." lightbox="./media/scheduled-updates/scheduled-patching-attach-maintenance-expanded.png"::: - :::image type="content" source="./media/scheduled-updates/scheduled-patching-attach-maintenance-inline.png" alt-text="Scheduled patching attach maintenance configuration." lightbox="./media/scheduled-updates/scheduled-patching-attach-maintenance-expanded.png"::: - ## Schedule recurring updates from maintenance configuration -You can browse and manage all your maintenance configurations from a single place. 
+You can browse and manage all your maintenance configurations from a single place. -1. Search **Maintenance configurations** in the Azure portal. It shows a list of all maintenance configurations along with the maintenance scope, resource group, location, and the subscription to which it belongs. +1. Search **Maintenance configurations** in the Azure portal. It shows a list of all maintenance configurations along with the maintenance scope, resource group, location, and the subscription to which it belongs. -1. You can filter maintenance configurations using filters at the top. Maintenance configurations related to Guest OS updates are the ones that have Maintenance scope as **InGuestPatch**. +1. You can filter maintenance configurations by using filters at the top. Maintenance configurations related to guest OS updates are the ones that have maintenance scope as **InGuestPatch**. -You can create a new Guest OS update maintenance configuration or modify an existing configuration: -+You can create a new guest OS update maintenance configuration or modify an existing configuration. ### Create a new maintenance configuration 1. Go to **Machines** and select machines from the list.-1. In the **Updates**, select **Scheduled updates**. -1. In **Create a maintenance configuration**, follow step 3 in this [procedure](#schedule-recurring-updates-on-single-vm) to create a maintenance configuration. -1. In **Basics** tab, select the **Maintenance scope** as *Guest (Azure VM, Arc-enabled VMs/servers)*. +1. On the **Updates** pane, select **Scheduled updates**. +1. On the **Create a maintenance configuration** pane, follow step 3 in this [procedure](#schedule-recurring-updates-on-a-single-vm) to create a maintenance configuration. +1. On the **Basics** tab, select the **Maintenance scope** as **Guest (Azure VM, Arc-enabled VMs/servers)**. 
- :::image type="content" source="./media/scheduled-updates/create-maintenance-configuration.png" alt-text="Create Maintenance configuration."::: - + :::image type="content" source="./media/scheduled-updates/create-maintenance-configuration.png" alt-text="Screenshot that shows creating a maintenance configuration."::: -### Add/remove machines from maintenance configuration +### Add or remove machines from maintenance configuration 1. Go to **Machines** and select the machines from the list.-1. In **Updates** page, select **One-time updates**. -1. In **Install one-time updates**, **Machines**, select **+Add machine**. +1. On the **Updates** page, select **One-time updates**. +1. On the **Install one-time updates** pane, select **Machines** > **Add machine**. - :::image type="content" source="./media/scheduled-updates/add-or-remove-machines-from-maintenance-configuration-inline.png" alt-text="Add/remove machines from Maintenance configuration." lightbox="./media/scheduled-updates/add-or-remove-machines-from-maintenance-configuration-expanded.png"::: - + :::image type="content" source="./media/scheduled-updates/add-or-remove-machines-from-maintenance-configuration-inline.png" alt-text="Screenshot that shows adding or removing machines from maintenance configuration." lightbox="./media/scheduled-updates/add-or-remove-machines-from-maintenance-configuration-expanded.png"::: ### Change update selection criteria -1. In **Install one-time updates**, select the resources and machines to install the updates. -1. In **Machines**, select **+Add machine** to add machines that were previously not selected and click **Add**. -1. In **Updates**, specify the updates to include in the deployment. -1. Select **Include KB ID/package** and **Exclude KB ID/package** respectively to select category of updates like Critical, Security, Feature updates etc. +1. On the **Install one-time updates** pane, select the resources and machines to install the updates. +1. 
On the **Machines** tab, select **Add machine** to add machines that weren't previously selected, and then select **Add**. +1. On the **Updates** tab, specify the updates to include in the deployment. +1. Select **Include KB ID/package** and **Exclude KB ID/package**, respectively, to select updates like **Critical**, **Security**, and **Feature updates**. - :::image type="content" source="./media/scheduled-updates/change-update-selection-criteria-of-maintenance-configuration-inline.png" alt-text="Change update selection criteria of Maintenance configuration." lightbox="./media/scheduled-updates/change-update-selection-criteria-of-maintenance-configuration-expanded.png"::: + :::image type="content" source="./media/scheduled-updates/change-update-selection-criteria-of-maintenance-configuration-inline.png" alt-text="Screenshot that shows changing update selection criteria of Maintenance configuration." lightbox="./media/scheduled-updates/change-update-selection-criteria-of-maintenance-configuration-expanded.png"::: -## Onboarding to Schedule using Policy +## Onboard to schedule by using Azure Policy -The Azure update Manager allows you to target a group of Azure or non-Azure VMs for update deployment via Azure Policy. The grouping using policy, keeps you from having to edit your deployment to update machines. You can use subscription, resource group, tags or regions to define the scope and use this feature for the built-in policies which you can customize as per your use-case. +Update Manager allows you to target a group of Azure or non-Azure VMs for update deployment via Azure Policy. The grouping using a policy keeps you from having to edit your deployment to update machines. You can use subscription, resource group, tags, or regions to define the scope. You can use this feature for the built-in policies, which you can customize according to your use case. 
> [!NOTE]-> This policy also ensures that the patch orchestration property for Azure machines is set to **Customer Managed Schedules** as it is a prerequisite for scheduled patching. -+> This policy also ensures that the patch orchestration property for Azure machines is set to **Customer Managed Schedules** because it's a prerequisite for scheduled patching. ### Assign a policy -Policy allows you to assign standards and assess compliance at scale. [Learn more](../governance/policy/overview.md). To assign a policy to scope, follow these steps: +Azure Policy allows you to assign standards and assess compliance at scale. For more information, see [Overview of Azure Policy](../governance/policy/overview.md). To assign a policy to scope: 1. Sign in to the [Azure portal](https://portal.azure.com) and select **Policy**.-1. In **Assignments**, select **Assign policy**. -1. Under **Basics**, in the **Assign policy** page: - - In **Scope**, choose your subscription, resource group, and choose **Select**. +1. Under **Assignments**, select **Assign policy**. +1. On the **Assign policy** page, on the **Basics** tab: + - For **Scope**, choose your subscription and resource group and choose **Select**. - Select **Policy definition** to view a list of policies.- - In **Available Definitions**, select **Built in** for Type and in search, enter - *Schedule recurring updates using Azure Update Manager* and click **Select**. + - On the **Available Definitions** pane, select **Built in** for **Type**. In **Search**, enter **Schedule recurring updates using Azure Update Manager** and click **Select**. 
- :::image type="content" source="./media/scheduled-updates/dynamic-scoping-defintion.png" alt-text="Screenshot that shows on how to select the definition."::: + :::image type="content" source="./media/scheduled-updates/dynamic-scoping-defintion.png" alt-text="Screenshot that shows how to select the definition."::: - - Ensure that **Policy enforcement** is set to **Enabled** and select **Next**. -1. In **Parameters**, by default, only the Maintenance configuration ARM ID is visible. + - Ensure that **Policy enforcement** is set to **Enabled**, and then select **Next**. +1. On the **Parameters** tab, by default, only the **Maintenance configuration ARM ID** is visible. - >[!NOTE] - > If you do not specify any other parameters, all machines in the subscription and resource group that you selected in **Basics** will be covered under scope. However, if you want to scope further based on resource group, location, OS, tags and so on, deselect **Only show parameters that need input or review** to view all parameters. + If you don't specify any other parameters, all machines in the subscription and resource group that you selected on the **Basics** tab are covered under scope. If you want to scope further based on resource group, location, OS, tags, and so on, clear **Only show parameters that need input or review** to view all parameters: - - Maintenance Configuration ARM ID: A mandatory parameter to be provided. It denotes the ARM ID of the schedule that you want to assign to the machines. - - Resource groups: You can specify a resource group optionally if you want to scope it down to a resource group. By default, all resource groups within the subscription are selected. - - Operating System types: You can select Windows or Linux. By default, both are preselected. - - Machine locations: You can optionally specify the regions that you want to select. By default, all are selected. - - Tags on machines: You can use tags to scope down further. By default, all are selected. 
- - Tags operator: In case you have selected multiple tags, you can specify if you want the scope to be machines that have all the tags or machines which have any of those tags. + - **Maintenance Configuration ARM ID**: A mandatory parameter to be provided. It denotes the Azure Resource Manager (ARM) ID of the schedule that you want to assign to the machines. + - **Resource groups**: You can optionally specify a resource group if you want to scope it down to a resource group. By default, all resource groups within the subscription are selected. + - **Operating System types**: You can select Windows or Linux. By default, both are preselected. + - **Machine locations**: You can optionally specify the regions that you want to select. By default, all are selected. + - **Tags on machines**: You can use tags to scope down further. By default, all are selected. + - **Tags operator**: If you select multiple tags, you can specify if you want the scope to be machines that have all the tags or machines that have any of those tags. - :::image type="content" source="./media/scheduled-updates/dynamic-scoping-assign-policy.png" alt-text="Screenshot that shows on how to assign a policy."::: + :::image type="content" source="./media/scheduled-updates/dynamic-scoping-assign-policy.png" alt-text="Screenshot that shows how to assign a policy."::: -1. In **Remediation**, **Managed Identity**, **Type of Managed Identity**, select System assigned managed identity and **Permissions** is already set as *Contributor* according to the policy definition. +1. On the **Remediation** tab, in **Managed Identity** > **Type of Managed Identity**, select **System assigned managed identity**. **Permissions** is already set as **Contributor** according to the policy definition. - >[!NOTE] - > If you select Remediation, the policy would be effective on all the existing machines in the scope else, it is assigned to any new machine which is added to the scope. 
+ If you select **Remediation**, the policy takes effect on all the existing machines in the scope. Otherwise, it's assigned only to new machines that are added to the scope.

-1. In **Review + Create**, verify your selections, and select **Create** to identify the non-compliant resources to understand the compliance state of your environment.
+1. On the **Review + create** tab, verify your selections, and then select **Create** to identify the noncompliant resources to understand the compliance state of your environment.

-### View Compliance
+### View compliance

 To view the current compliance state of your existing resources:

 1. In **Policy Assignments**, select **Scope** to select your subscription and resource group.
-1. In **Definition type**, select policy and in the list, select the assignment name.
-1. Select **View compliance**. The Resource Compliance lists the machines and reasons for failure.
+1. In **Definition type**, select the policy. In the list, select the assignment name.
+1. Select **View compliance**. **Resource compliance** lists the machines and reasons for failure.

-    :::image type="content" source="./media/scheduled-updates/dynamic-scoping-policy-compliance.png" alt-text="Screenshot that shows on policy compliance.":::
+    :::image type="content" source="./media/scheduled-updates/dynamic-scoping-policy-compliance.png" alt-text="Screenshot that shows policy compliance.":::

 ## Check your scheduled patching run
-You can check the deployment status and history of your maintenance configuration runs from the Update Manager portal. Follow [Update deployment history by maintenance run ID](./manage-multiple-machines.md#update-deployment-history-by-maintenance-run-id).
+
+You can check the deployment status and history of your maintenance configuration runs from the Update Manager portal. For more information, see [Update deployment history by maintenance run ID](./manage-multiple-machines.md#update-deployment-history-by-maintenance-run-id).
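The portal assignment described above can also be scripted. The following Azure CLI sketch is illustrative only: the subscription ID, built-in policy definition ID, region, and maintenance configuration ID are placeholders you'd substitute, and the parameter name `maintenanceConfigurationResourceId` is assumed from the built-in policy definition rather than confirmed here.

```
# Hypothetical sketch: assign the "Schedule recurring updates using Azure
# Update Manager" built-in policy at subscription scope with a
# system-assigned managed identity (Contributor, per the policy definition).
# Every <placeholder> must be replaced with a real value.
az policy assignment create \
  --name "schedule-recurring-updates" \
  --scope "/subscriptions/<subscription-id>" \
  --policy "<built-in-policy-definition-id>" \
  --params '{"maintenanceConfigurationResourceId": {"value": "<maintenance-configuration-arm-id>"}}' \
  --mi-system-assigned \
  --location "<region>" \
  --role "Contributor" \
  --identity-scope "/subscriptions/<subscription-id>"
```

After assignment, the compliance view in the portal (or `az policy state list`) reflects the same **Resource compliance** results described above.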
## Next steps -* To view update assessment and deployment logs generated by Update Manager, see [query logs](query-logs.md). -* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) Update Manager. +* To view update assessment and deployment logs generated by Update Manager, see [Query logs](query-logs.md). +* To troubleshoot issues, see [Troubleshoot Update Manager](troubleshoot.md). |
update-center | Support Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/support-matrix.md | Title: Azure Update Manager support matrix -description: Provides a summary of supported regions and operating system settings. +description: This article provides a summary of supported regions and operating system settings. -This article details the Windows and Linux operating systems supported and system requirements for machines or servers managed by Update Manager including the supported regions and specific versions of the Windows Server and Linux operating systems running on Azure VMs or machines managed by Arc-enabled servers. +This article details the Windows and Linux operating systems supported and system requirements for machines or servers managed by Azure Update Manager. The article includes the supported regions and specific versions of the Windows Server and Linux operating systems running on Azure virtual machines (VMs) or machines managed by Azure Arc-enabled servers. ## Update sources supported -**Windows**: [Windows Update Agent (WUA)](/windows/win32/wua_sdk/updating-the-windows-update-agent) reports to Microsoft Update by default, but you can configure it to report to [Windows Server Update Services (WSUS)](/windows-server/administration/windows-server-update-services/get-started/windows-server-update-services-wsus). If you configure WUA to report to WSUS, based on the WSUS's last synchronization with Microsoft update, the results in the Update Manager might differ to what the Microsoft update shows. You can specify sources for scanning and downloading updates using [specify intranet Microsoft Update service location](/windows/deployment/update/waas-wu-settings?branch=main#specify-intranet-microsoft-update-service-location). 
To restrict machines to the internal update service, see [Do not connect to any Windows Update Internet locations](/windows-server/administration/windows-server-update-services/deploy/4-configure-group-policy-settings-for-automatic-updates?branch=main#do-not-connect-to-any-windows-update-internet-locations) +**Windows**: [Windows Update Agent (WUA)](/windows/win32/wua_sdk/updating-the-windows-update-agent) reports to Microsoft Update by default, but you can configure it to report to [Windows Server Update Services (WSUS)](/windows-server/administration/windows-server-update-services/get-started/windows-server-update-services-wsus). If you configure WUA to report to WSUS, based on the last synchronization from WSUS with Microsoft Update, the results in Update Manager might differ from what Microsoft Update shows. ++To specify sources for scanning and downloading updates, see [Specify intranet Microsoft Update service location](/windows/deployment/update/waas-wu-settings?branch=main#specify-intranet-microsoft-update-service-location). To restrict machines to the internal update service, see [Do not connect to any Windows Update internet locations](/windows-server/administration/windows-server-update-services/deploy/4-configure-group-policy-settings-for-automatic-updates?branch=main#do-not-connect-to-any-windows-update-internet-locations). **Linux**: You can configure Linux machines to report to a local or public YUM or APT package repository. The results shown in Update Manager depend on where the machines are configured to report. ## Types of updates supported +The following types of updates are supported. + ### Operating system updates+ Update Manager supports operating system updates for both Windows and Linux. -> [!NOTE] -> Update Manager doesn't support driver Updates. +Update Manager doesn't support driver updates. -### First party updates on Windows -By default, the Windows Update client is configured to provide updates only for Windows operating system. 
If you enable the **Give me updates for other Microsoft products when I update Windows** setting, you also receive updates for other Microsoft products, including security patches for Microsoft SQL Server and other Microsoft software. +### First-party updates on Windows ++By default, the Windows Update client is configured to provide updates only for the Windows operating system. If you enable the **Give me updates for other Microsoft products when I update Windows** setting, you also receive updates for other Microsoft products. Updates include security patches for Microsoft SQL Server and other Microsoft software. Use one of the following options to perform the settings change at scale: -- For Servers configured to patch on a schedule from Update Manager (that has the VM PatchSettings set to AutomaticByPlatform = Azure-Orchestrated), and for all Windows Servers running on an earlier operating system than server 2016, Run the following PowerShell script on the server you want to change.+- For servers configured to patch on a schedule from Update Manager (with VM `PatchSettings` set to `AutomaticByPlatform = Azure-Orchestrated`), and for all Windows Servers running on an earlier operating system than Windows Server 2016, run the following PowerShell script on the server you want to change: ```powershell $ServiceManager = (New-Object -com "Microsoft.Update.ServiceManager") Use one of the following options to perform the settings change at scale: $ServiceID = "7971f918-a847-4430-9279-4a52d1efe18d" $ServiceManager.AddService2($ServiceId,7,"") ```-- For servers running Server 2016 or later which aren't using Update Manager scheduled patching (that has the VM PatchSettings set to AutomaticByOS = Azure-Orchestrated) you can use Group Policy to control this by downloading and using the latest Group Policy [Administrative template files](/troubleshoot/windows-client/group-policy/create-and-manage-central-store).++- For servers running Windows Server 2016 or later that 
aren't using Update Manager scheduled patching (with VM `PatchSettings` set to `AutomaticByOS = Azure-Orchestrated`), you can use Group Policy to control this process by downloading and using the latest Group Policy [Administrative template files](/troubleshoot/windows-client/group-policy/create-and-manage-central-store).

> [!NOTE]
-> Run the following PowerShell script on the server to disable first party updates.
+> Run the following PowerShell script on the server to disable first-party updates:
+>
> ```powershell
> $ServiceManager = (New-Object -com "Microsoft.Update.ServiceManager")
> $ServiceManager.Services

Use one of the following options to perform the settings change at scale:

**Windows**: Update Manager relies on the locally configured update repository to update supported Windows systems, either WSUS or Windows Update. Tools such as [System Center Updates Publisher](/mem/configmgr/sum/tools/updates-publisher) allow you to import and publish custom updates with WSUS. This scenario allows Update Manager to update machines that use Configuration Manager as their update repository with third-party software. To learn how to configure Updates Publisher, see [Install Updates Publisher](/mem/configmgr/sum/tools/install-updates-publisher).

-**Linux**: If you include a specific third party software repository in the Linux package manager repository location, it's scanned when it performs software update operations. The package won't be available for assessment and installation if you remove it.
-
-> [!NOTE]
-> Update Manager does not support managing the Microsoft Configuration Manager client.
+**Linux**: If you include a specific third-party software repository in the Linux package manager repository location, it's scanned when Update Manager performs software update operations. The package isn't available for assessment and installation if you remove it.

+Update Manager doesn't support managing the Configuration Manager client.
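Because the Linux assessment draws from whatever repositories the package manager is configured with, you can preview what a scan will see by listing the active repo definitions. A minimal sketch for an APT-based system, run against a temporary sample file so it's self-contained (on a real machine you'd point it at `/etc/apt/sources.list` and `/etc/apt/sources.list.d/`; the example URLs are hypothetical):

```shell
# List active APT repository lines. A third-party repo added here becomes
# visible to update assessment; removing it drops its packages from scans.
# A temp sample file keeps the sketch runnable anywhere.
SOURCES=$(mktemp)
cat > "$SOURCES" <<'EOF'
deb http://archive.ubuntu.com/ubuntu jammy main restricted
# deb http://disabled.example.com/repo stable main
deb http://thirdparty.example.com/repo stable main
EOF
grep -E '^deb ' "$SOURCES"   # commented-out entries are ignored
rm -f "$SOURCES"
```

Only the two uncommented `deb` lines are reported, mirroring how a disabled repository is invisible to assessment.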
## Supported regions

-Update Manager will scale to all regions for both Azure VMs and Azure Arc-enabled servers. Listed below are the Azure public cloud where you can use Update Manager.
+Update Manager scales to all regions for both Azure VMs and Azure Arc-enabled servers. The following table lists the Azure public cloud where you can use Update Manager.

-# [Azure virtual machine](#tab/azurevm)
+# [Azure VMs](#tab/azurevm)

 Azure Update Manager is available in all Azure public regions where compute virtual machines are available.

 # [Azure Arc-enabled servers](#tab/azurearc)
-Azure Update Manager is supported in the following regions currently. It implies that VMs must be in below regions:
-**Geography** | **Supported Regions**
+Azure Update Manager is currently supported in the following regions. Your machines must be in one of these regions.
+
+**Geography** | **Supported regions**
---|---
Africa | South Africa North
Asia Pacific | East Asia </br> South East Asia
United States | Central US </br> East US </br> East US 2</br> North Central US <

 ## Supported operating systems

-> [!NOTE]
-> - All operating systems are assumed to be x64. x86 isn't supported for any operating system.
-> - Update Manager doesn't support CIS hardened images.
+All operating systems are assumed to be x64. x86 isn't supported for any operating system.
+Update Manager doesn't support CIS-hardened images.

 # [Azure VMs](#tab/azurevm-os)

 > [!NOTE]
-> Currently, Azure Update Manager has the following limitation regarding the operating system support:
-> - [Specialized images](../virtual-machines/linux/imaging.md#specialized-images) and **VMs created by Azure Migrate, Azure Backup, Azure Site Recovery** aren't fully supported for now. However, you can **use on-demand operations such as one-time update and check for updates** in Update Manager.
+> Currently, Azure Update Manager has the following limitation regarding the operating system support: +> +> - [Specialized images](../virtual-machines/linux/imaging.md#specialized-images) and **VMs created by Azure Migrate, Azure Backup, and Azure Site Recovery** aren't fully supported for now. However, you can **use on-demand operations such as one-time update and check for updates** in Update Manager. >-> For the above limitation, we recommend that you use [Automation Update management](../automation/update-management/overview.md) till the support is available in Update Manager. +> For the preceding limitation, we recommend that you use [Automation Update Management](../automation/update-management/overview.md) until support is available in Update Manager. +### Azure Marketplace/PIR images -### Marketplace/PIR images +The Azure Marketplace image has the following attributes: -The Marketplace image in Azure has the following attributes: -- **Publisher** - The organization that creates the image. Examples: Canonical, MicrosoftWindowsServer-- **Offer**- The name of the group of related images created by the publisher. Examples: UbuntuServer, WindowsServer-- **SKU**- An instance of an offer, such as a major release of a distribution. Examples: 18.04LTS, 2019-Datacenter-- **Version** - The version number of an image SKU.+- **Publisher**: The organization that creates the image. Examples are `Canonical` and `MicrosoftWindowsServer`. +- **Offer**: The name of the group of related images created by the publisher. Examples are `UbuntuServer` and `WindowsServer`. +- **SKU**: An instance of an offer, such as a major release of a distribution. Examples are `18.04LTS` and `2019-Datacenter`. +- **Version**: The version number of an image SKU. -Azure Update Manager supports the following operating system versions. However, you could experience failures if there are any configuration changes on the VMs such as package or repository. 
+Update Manager supports the following operating system versions. You might experience failures if there are any configuration changes on the VMs, such as package or repository. #### Windows operating systems -| **Publisher**| **Versions(s)** +| **Publisher**| **Versions** |-|-| |Microsoft Windows Server | 1709, 1803, 1809, 2012, 2016, 2019, 2022| |Microsoft Windows Server HPC Pack | 2012, 2016, 2019 | |Microsoft SQL Server | 2008, 2012, 2014, 2016, 2017, 2019, 2022 | |Microsoft Visual Studio | ws2012r2, ws2016, ws2019, ws2022 | |Microsoft Azure Site Recovery | Windows 2012-|Microsoft Biz Talk Server | 2016, 2020 | +|Microsoft BizTalk Server | 2016, 2020 | |Microsoft DynamicsAx | ax7 | |Microsoft Power BI | 2016, 2017, 2019, 2022 |-|Microsoft Sharepoint | sp* | +|Microsoft SharePoint | sp* | #### Linux operating systems -| **Publisher**| **Versions(s)** +| **Publisher**| **Versions** |-|-| |Canonical | Ubuntu 16.04, 18.04, 20.04, 22.04 |-|RedHat | RHEL 7,8,9| -|Openlogic | CentOS 7| +|Red Hat | RHEL 7,8,9| +|OpenLogic | CentOS 7| |SUSE 12 |sles, sles-byos, sap, sap-byos, sapcal, sles-standard | |SUSE 15 | basic, hpc, opensuse, sles, sap, sapcal| |Oracle Linux | 7*, ol7*, ol8*, ol9* | |Oracle Database | 21, 19-0904, 18.*| -#### Unsupported Operating systems +#### Unsupported operating systems -The following table lists the operating systems for marketplace images that aren't supported: +The following table lists the operating systems for Azure Marketplace images that aren't supported. -| **Publisher**| **OS Offer** | **SKU**| +| **Publisher**| **OS offer** | **SKU**| |-|-|--| |OpenLogic | CentOS | 8* | |OpenLogic | centos-hpc| * | The following table lists the operating systems for marketplace images that aren ### Custom images -We support [generalized](../virtual-machines/linux/imaging.md#generalized-images) custom images. Table below lists the operating systems that we support for generalized images. 
Refer to [custom images (preview)](manage-updates-customized-images.md) for instructions on how to start using Update Manager to manage updates on custom images. +We support [generalized](../virtual-machines/linux/imaging.md#generalized-images) custom images. The following table lists the operating systems that we support for generalized images. For instructions on how to start using Update Manager to manage updates on custom images, see [Custom images (preview)](manage-updates-customized-images.md). - |**Windows Operating System**| + |**Windows operating system**| || |Windows Server 2022| |Windows Server 2019| We support [generalized](../virtual-machines/linux/imaging.md#generalized-images |Windows Server 2012| |Windows Server 2008 R2 (RTM and SP1 Standard)| -- |**Linux Operating System**| + |**Linux operating system**| || |CentOS 7, 8| |Oracle Linux 7.x, 8x| We support [generalized](../virtual-machines/linux/imaging.md#generalized-images |SUSE Linux Enterprise Server 12.x, 15.0-15.4| |Ubuntu 16.04 LTS, 18.04 LTS, 20.04 LTS, 22.04 LTS| - # [Azure Arc-enabled servers](#tab/azurearc-os) -The table lists the operating systems supported on [Azure Arc-enabled servers](../azure-arc/servers/overview.md) are: +The following table lists the operating systems supported on [Azure Arc-enabled servers](../azure-arc/servers/overview.md). - |**Operating System**| + |**Operating system**| |-| | Amazon Linux 2023 | | Windows Server 2012 R2 and higher (including Server Core) | The table lists the operating systems supported on [Azure Arc-enabled servers](. -## Unsupported Operating systems +## Unsupported operating systems -The following table lists the operating systems that aren't supported: +The following table lists the operating systems that aren't supported. 
- | **Operating system**| **Notes** + | **Operating system**| **Notes** |-|-| | Windows client | For client operating systems such as Windows 10 and Windows 11, we recommend [Microsoft Intune](/mem/intune/) to manage updates.| | Virtual machine scale sets| We recommend that you use [Automatic upgrades](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md) to patch the virtual machine scale sets.|- | Azure Kubernetes Nodes| We recommend the patching described in [Apply security and kernel updates to Linux nodes in Azure Kubernetes Service (AKS)](/azure/aks/node-updates-kured).| + | Azure Kubernetes Service nodes| We recommend the patching described in [Apply security and kernel updates to Linux nodes in Azure Kubernetes Service (AKS)](/azure/aks/node-updates-kured).| --As the Azure Update Manager depends on your machine's OS package manager or update service, ensure that the Linux package manager, or Windows Update client are enabled and can connect with an update source or repository. If you're running a Windows Server OS on your machine, see [configure Windows Update settings](configure-wu-agent.md). - +Because Update Manager depends on your machine's OS package manager or update service, ensure that the Linux package manager or Windows Update client is enabled and can connect with an update source or repository. If you're running a Windows Server OS on your machine, see [Configure Windows Update settings](configure-wu-agent.md). ## Next steps-- [View updates for single machine](view-updates.md) -- [Deploy updates now (on-demand) for single machine](deploy-updates.md) ++- [View updates for a single machine](view-updates.md) +- [Deploy updates now (on-demand) for a single machine](deploy-updates.md) - [Schedule recurring updates](scheduled-patching.md)-- [Manage update settings via Portal](manage-update-settings.md)+- [Manage update settings via the portal](manage-update-settings.md) |
update-center | Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/troubleshoot.md | Title: Troubleshoot known issues with Azure Update Manager -description: The article provides details on the known issues and troubleshooting any problems with Azure Update Manager. +description: This article provides details on known issues and how to troubleshoot any problems with Azure Update Manager. Last updated 09/18/2023 -This article describes the errors that might occur when you deploy or use Update Manager, how to resolve them and the known issues and limitations of scheduled patching. --This article describes the errors that might occur when you deploy or use Update Manager, how to resolve them and the known issues and limitations of scheduled patching. +This article describes the errors that might occur when you deploy or use Azure Update Manager, how to resolve them, and the known issues and limitations of scheduled patching. ## General troubleshooting -The following troubleshooting steps apply to the Azure VMs related to the patch extension on Windows and Linux machines. +The following troubleshooting steps apply to the Azure virtual machines (VMs) related to the patch extension on Windows and Linux machines. ### Azure Linux VM -To verify if the Microsoft Azure Virtual Machine Agent (VM Agent) is running, has triggered appropriate actions on the machine, and the sequence number for the Auto-Patching request, check the agent log for more details in `/var/log/waagent.log`. Every Auto-Patching request has a unique sequence number associated with it on the machine. Look for a log similar to: `2021-01-20T16:57:00.607529Z INFO ExtHandler`. +To verify if the Microsoft Azure Virtual Machine agent (VM agent) is running and has triggered appropriate actions on the machine and the sequence number for the autopatching request, check the agent log for more information in `/var/log/waagent.log`. 
Every autopatching request has a unique sequence number associated with it on the machine. Look for a log similar to `2021-01-20T16:57:00.607529Z INFO ExtHandler`. -The package directory for the extension is `/var/lib/waagent/Microsoft.CPlat.Core.Edp.LinuxPatchExtension-<version>` and in the `/status` subfolder is a `<sequence number>.status` file, which includes a brief description of the actions performed during a single Auto-Patching request, and the status. It also includes a short list of errors that occurred while applying updates. +The package directory for the extension is `/var/lib/waagent/Microsoft.CPlat.Core.Edp.LinuxPatchExtension-<version>`. The `/status` subfolder has a `<sequence number>.status` file. It includes a brief description of the actions performed during a single autopatching request and the status. It also includes a short list of errors that occurred while applying updates. -To review the logs related to all actions performed by the extension, check for more details in `/var/log/azure/Microsoft.CPlat.Core.Edp.LinuxPatchExtension/`. It includes the following two log files of interest: +To review the logs related to all actions performed by the extension, check for more information in `/var/log/azure/Microsoft.CPlat.Core.Edp.LinuxPatchExtension/`. It includes the following two log files of interest: -* `<seq number>.core.log`: Contains details related to the patch actions, such as the patches assessed and installed on the machine, and any issues encountered in the process. -* `<Date and Time>_<Handler action>.ext.log`: There is a wrapper above the patch action, which is used to manage the extension and invoke specific patch operation. This log contains details about the wrapper. For Auto-Patching, the `<Date and Time>_Enable.ext.log` has details on whether the specific patch operation was invoked. +* `<seq number>.core.log`: Contains information related to the patch actions. 
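That log search can be sketched as a short self-contained shell example; sample text stands in for `/var/log/waagent.log` so the commands run anywhere, and the extension name shown is illustrative:

```shell
# Sketch: pull the agent's ExtHandler entries out of a waagent-style log to
# trace a patch extension run. On a real Azure Linux VM, replace the sample
# with /var/log/waagent.log.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
2021-01-20T16:57:00.607529Z INFO ExtHandler [Microsoft.CPlat.Core.LinuxPatchExtension] Enable started
2021-01-20T16:58:12.100000Z INFO Daemon heartbeat
2021-01-20T16:59:03.000000Z INFO ExtHandler [Microsoft.CPlat.Core.LinuxPatchExtension] Enable completed
EOF
grep 'INFO ExtHandler' "$LOG"   # only the extension-handler entries match
rm -f "$LOG"
```

Filtering further by the sequence number ties the entries to a single autopatching request.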
This information includes patches assessed and installed on the machine and any problems encountered in the process. +* `<Date and Time>_<Handler action>.ext.log`: There's a wrapper above the patch action, which is used to manage the extension and invoke specific patch operation. This log contains information about the wrapper. For autopatching, the log `<Date and Time>_Enable.ext.log` has information on whether the specific patch operation was invoked. -### Azure Windows VM +### Azure Windows VM -To verify if the Microsoft Azure Virtual Machine Agent (VM Agent) is running, has triggered appropriate actions on the machine, and the sequence number for the Auto-Patching request, check the agent log for more details in `C:\WindowsAzure\Logs\AggregateStatus`. The package directory for the extension is `C:\Packages\Plugins\Microsoft.CPlat.Core.WindowsPatchExtension<version>`. +To verify if the VM agent is running and has triggered appropriate actions on the machine and the sequence number for the autopatching request, check the agent log for more information in `C:\WindowsAzure\Logs\AggregateStatus`. The package directory for the extension is `C:\Packages\Plugins\Microsoft.CPlat.Core.WindowsPatchExtension<version>`. -To review the logs related to all actions performed by the extension, check for more details in `C:\WindowsAzure\Logs\Plugins\Microsoft.CPlat.Core.WindowsPatchExtension<version>`. It includes the following two log files of interest: +To review the logs related to all actions performed by the extension, check for more information in `C:\WindowsAzure\Logs\Plugins\Microsoft.CPlat.Core.WindowsPatchExtension<version>`. It includes the following two log files of interest: -* `WindowsUpdateExtension.log`: Contains details related to the patch actions, such as the patches assessed and installed on the machine, and any issues encountered in the process. 
-* `CommandExecution.log`: There is a wrapper above the patch action, which is used to manage the extension and invoke specific patch operation. This log contains details about the wrapper. For Auto-Patching, the log has details on whether the specific patch operation was invoked.
+* `WindowsUpdateExtension.log`: Contains information related to the patch actions. This information includes patches assessed and installed on the machine and any problems encountered in the process.
+* `CommandExecution.log`: There's a wrapper above the patch action, which is used to manage the extension and invoke specific patch operation. This log contains information about the wrapper. For autopatching, the log has information on whether the specific patch operation was invoked.

 ## Unable to change the patch orchestration option to manual updates from automatic updates

-### Issue
+Here's the scenario.
+
+### Issue

-Azure machine has the patch orchestration option as AutomaticByOS/Windows automatic updates and you are unable to change the patch orchestration to Manual Updates using Change update settings.
+The Azure machine has the patch orchestration option as `AutomaticByOS` (Windows automatic updates), and you're unable to change the patch orchestration to Manual Updates by using **Change update settings**.

 ### Resolution

-If you don't want any patch installation to be orchestrated by Azure or aren't using custom patching solutions, you can change the patch orchestration option to **Customer Managed Schedules (Preview)** or **AutomaticByPlatform/ByPassPlatformSafetyChecksOnUserSchedule** and not associate a schedule/maintenance configuration to the machine. This will ensure that no patching is performed on the machine until you change it explicitly. For more information, see **scenario 2** in [User scenarios](prerequsite-for-schedule-patching.md#user-scenarios).
+If you don't want any patch installation to be orchestrated by Azure or aren't using custom patching solutions, you can change the patch orchestration option to **Customer Managed Schedules (Preview)** or `AutomaticByPlatform` and `ByPassPlatformSafetyChecksOnUserSchedule` and not associate a schedule/maintenance configuration to the machine. This setting ensures that no patching is performed on the machine until you change it explicitly. For more information, see **Scenario 2** in [User scenarios](prerequsite-for-schedule-patching.md#user-scenarios). :::image type="content" source="./media/troubleshoot/known-issue-update-settings-failed.png" alt-text="Screenshot that shows a notification of failed update settings."::: ## Machine shows as "Not assessed" and shows an HRESULT exception +Here's the scenario. + ### Issue * You have machines that show as `Not assessed` under **Compliance**, and you see an exception message below them.-* You see an HRESULT error code in the portal. +* You see an `HRESULT` error code in the portal. ### Cause -The Update Agent (Windows Update Agent on Windows; the package manager for a Linux distribution) isn't configured correctly. Update Manager relies on the machine's Update Agent to provide the updates that are needed, the status of the patch, and the results of deployed patches. Without this information, Update Manager can't properly report on the patches that are needed or installed. +The Update Agent (Windows Update Agent on Windows and the package manager for a Linux distribution) isn't configured correctly. Update Manager relies on the machine's Update Agent to provide the updates that are needed, the status of the patch, and the results of deployed patches. Without this information, Update Manager can't properly report on the patches that are needed or installed. ### Resolution -Try to perform updates locally on the machine. If this operation fails, it typically means that there's an update agent configuration error. 
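One way to "perform updates locally" on a Windows machine is to drive the Windows Update Agent COM API directly; if this scan fails, the fault lies with the local agent or its network path to the update source rather than with Update Manager. A minimal PowerShell sketch (Windows only, run elevated):

```powershell
# Sketch (Windows only): ask the local Windows Update Agent to scan for
# missing software updates. A failure here indicates a local agent or
# network/WSUS configuration problem rather than an Update Manager issue.
$session  = New-Object -ComObject Microsoft.Update.Session
$searcher = $session.CreateUpdateSearcher()
$result   = $searcher.Search("IsInstalled=0 and Type='Software'")
Write-Output "Updates pending: $($result.Updates.Count)"
```

If the search throws, the exception's HRESULT can be looked up in the error-code table later in this article.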
+Try to perform updates locally on the machine. If this operation fails, it typically means that there's an Update Agent configuration error. -This problem is frequently caused by network configuration and firewall issues. Use the following checks to correct the issue. +This issue is frequently caused by network configuration and firewall problems. Use the following checks to correct the issue: * For Linux, check the appropriate documentation to make sure you can reach the network endpoint of your package repository.--* For Windows, check your agent configuration as listed in [Updates aren't downloading from the intranet endpoint (WSUS/SCCM)](/windows/deployment/update/windows-update-troubleshooting#updates-arent-downloading-from-the-intranet-endpoint-wsussccm). +* For Windows, check your agent configuration as described in [Updates aren't downloading from the intranet endpoint (WSUS/SCCM)](/windows/deployment/update/windows-update-troubleshooting#updates-arent-downloading-from-the-intranet-endpoint-wsussccm). * If the machines are configured for Windows Update, make sure that you can reach the endpoints described in [Issues related to HTTP/proxy](/windows/deployment/update/windows-update-troubleshooting#issues-related-to-httpproxy). * If the machines are configured for Windows Server Update Services (WSUS), make sure that you can reach the WSUS server configured by the [WUServer registry key](/windows/deployment/update/waas-wu-settings). -If you see an HRESULT, double-click the exception displayed in red to see the entire exception message. Review the following table for potential resolutions or recommended actions. +If you see an `HRESULT` error code, double-click the exception displayed in red to see the entire exception message. Review the following table for potential resolutions or recommended actions. 
|Exception |Resolution or action | |||-|`Exception from HRESULT: 0x……C` | Search the relevant error code in [Windows update error code list](https://support.microsoft.com/help/938205/windows-update-error-code-list) to find additional details about the cause of the exception. | -|`0x8024402C`</br>`0x8024401C`</br>`0x8024402F` | These indicate network connectivity issues. Make sure your machine has network connectivity to Update Management. See the [network planning](../automation/update-management/plan-deployment.md#ports) section for a list of required ports and addresses. | -|`0x8024001E`| The update operation didn't complete because the service or system was shutting down.| +|`Exception from HRESULT: 0x……C` | Search the relevant error code in the [Windows Update error code list](https://support.microsoft.com/help/938205/windows-update-error-code-list) to find more information about the cause of the exception. | +|`0x8024402C`</br>`0x8024401C`</br>`0x8024402F` | Indicates network connectivity problems. Make sure your machine has network connectivity to Update Management. For a list of required ports and addresses, see the [Network planning](../automation/update-management/plan-deployment.md#ports) section. | +|`0x8024001E`| The update operation didn't finish because the service or system was shutting down.| |`0x8024002E`| Windows Update service is disabled.|-|`0x8024402C` | If you're using a WSUS server, make sure the registry values for `WUServer` and `WUStatusServer` under the `HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate` registry key specify the correct WSUS server. | -|`0x80072EE2`|There's a network connectivity issue or an issue in talking to a configured WSUS server. Check WSUS settings and make sure the service is accessible from the client.| -|`The service cannot be started, either because it is disabled or because it has no enabled devices associated with it. 
(Exception from HRESULT: 0x80070422)` | Make sure the Windows Update service (wuauserv) is running and not disabled. | -|`0x80070005`| An access denied error can be caused by any one of the following:<br> Infected computer<br> Windows Update settings not configured correctly<br> File permission error with %WinDir%\SoftwareDistribution folder<br> Insufficient disk space on the system drive (C:). -|Any other generic exception | Run a search on the internet for possible resolutions, and work with your local IT support. | +|`0x8024402C` | If you're using a WSUS server, make sure the registry values for `WUServer` and `WUStatusServer` under the `HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate` registry key specify the correct WSUS server. | +|`0x80072EE2`|There's a network connectivity problem or a problem in talking to a configured WSUS server. Check WSUS settings and make sure the service is accessible from the client.| +|`The service cannot be started, either because it is disabled or because it has no enabled devices associated with it. (Exception from HRESULT: 0x80070422)` | Make sure the Windows Update service (`wuauserv`) is running and not disabled. | +|`0x80070005`| An access denied error can be caused by any one of the following problems:<br> - Infected computer.<br> - Windows Update settings not configured correctly.<br> - File permission error with the `%WinDir%\SoftwareDistribution` folder.<br> - Insufficient disk space on the system drive (drive C). +|Any other generic exception | Run a search on the internet for possible resolutions and work with your local IT support. | -Reviewing the **%Windir%\Windowsupdate.log** file can also help you determine possible causes. For more information about how to read the log, see [How to read the Windowsupdate.log file](https://support.microsoft.com/help/902093/how-to-read-the-windowsupdate-log-file). +Reviewing the `%Windir%\Windowsupdate.log` file can also help you determine possible causes. 
For more information about how to read the log, see [Read the Windowsupdate.log file](https://support.microsoft.com/help/902093/how-to-read-the-windowsupdate-log-file). -You can also download and run the [Windows Update troubleshooter](https://support.microsoft.com/help/4027322/windows-update-troubleshooter) to check for any issues with Windows Update on the machine. +You can also download and run the [Windows Update troubleshooter](https://support.microsoft.com/help/4027322/windows-update-troubleshooter) to check for any problems with Windows Update on the machine. > [!NOTE] > The [Windows Update troubleshooter](https://support.microsoft.com/help/4027322/windows-update-troubleshooter) documentation indicates that it's for use on Windows clients, but it also works on Windows Server. -### Arc-enabled servers +### Azure Arc-enabled servers -For Arc-enabled servers, review the [troubleshoot VM extensions](../azure-arc/servers/troubleshoot-vm-extensions.md) article for general troubleshooting steps. +For Azure Arc-enabled servers, see [Troubleshoot VM extensions](../azure-arc/servers/troubleshoot-vm-extensions.md) for general troubleshooting steps. -To review the logs related to all actions performed by the extension, on Windows check for more details in `C:\ProgramData\GuestConfig\extension_Logs\Microsoft.SoftwareUpdateManagement\WindowsOsUpdateExtension`. It includes the following two log files of interest: +To review the logs related to all actions performed by the extension, on Windows, check for more information in `C:\ProgramData\GuestConfig\extension_Logs\Microsoft.SoftwareUpdateManagement\WindowsOsUpdateExtension`. It includes the following two log files of interest: -* `WindowsUpdateExtension.log`: Contains details related to the patch actions, such as the patches assessed and installed on the machine, and any issues encountered in the process. 
-* `cmd_execution_<numeric>_stdout.txt`: There is a wrapper above the patch action, which is used to manage the extension and invoke specific patch operation. This log contains details about the wrapper. For Auto-Patching, the log has details on whether the specific patch operation was invoked.
+* `WindowsUpdateExtension.log`: Contains information related to the patch actions. This information includes the patches assessed and installed on the machine and any problems encountered in the process.
+* `cmd_execution_<numeric>_stdout.txt`: There's a wrapper above the patch action. It's used to manage the extension and invoke a specific patch operation. This log contains information about the wrapper. For autopatching, the log has information on whether the specific patch operation was invoked.
* `cmd_excution_<numeric>_stderr.txt`

## Known issues in schedule patching

-- For concurrent/conflicting schedule, only one schedule will be triggered. The other schedule will be triggered once a schedule is finished.
-- If a machine is newly created, the schedule might have 15 minutes of schedule trigger delay in case of Azure VMs.
-- Policy definition *Schedule recurring updates using Azure Update Manager* with version 1.0.0-preview successfully remediates resources however, it will always show them as non-compliant. The current value of the existence condition is a placeholder that will always evaluate to false.
+- For a concurrent or conflicting schedule, only one schedule is triggered. The other schedule is triggered after a schedule is finished.
+- If a machine is newly created, the schedule might have 15 minutes of schedule trigger delay in the case of Azure VMs.
+- Policy definition **Schedule recurring updates using Azure Update Manager** with version 1.0.0-preview successfully remediates resources. However, it always shows them as noncompliant. The current value of the existence condition is a placeholder that always evaluates to false.
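The `HRESULT` troubleshooting table earlier in this article maps cleanly onto a small lookup helper. The sketch below is illustrative only: the messages paraphrase the table, and the dictionary and function are assumptions for illustration, not an official Windows Update or Azure API.

```python
# Illustrative mapping of the Windows Update HRESULT codes from the
# troubleshooting table to short hints. Note that 0x8024402C appears twice
# in the table (network connectivity and WSUS misconfiguration), so the
# hint here combines both causes.
WU_HRESULT_HINTS = {
    0x8024402C: "Network connectivity problem, or the WUServer/WUStatusServer "
                "registry values point at the wrong WSUS server.",
    0x8024401C: "Network connectivity problem; verify required ports and addresses.",
    0x8024402F: "Network connectivity problem; verify required ports and addresses.",
    0x8024001E: "The update operation didn't finish because the service or "
                "system was shutting down.",
    0x8024002E: "Windows Update service is disabled.",
    0x80072EE2: "Network problem or a problem talking to a configured WSUS "
                "server; check WSUS settings and client access.",
    0x80070422: "Make sure the Windows Update service (wuauserv) is running "
                "and not disabled.",
    0x80070005: "Access denied: check for malware, Windows Update settings, "
                "SoftwareDistribution folder permissions, and free disk space.",
}

def hresult_hint(code: int) -> str:
    """Return a troubleshooting hint for a Windows Update HRESULT code."""
    return WU_HRESULT_HINTS.get(
        code, "Unrecognized code: search the Windows Update error code list."
    )
```

For codes not in the table, the fallback mirrors the guidance above: search the Windows Update error code list and work with local IT support.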
-### Scenario: Unable to apply patches for the shutdown machines
+### Unable to apply patches for the shutdown machines
+
+Here's the scenario.

#### Issue

-Patches aren’t getting applied for the machines that are in shutdown state, and you may also see that machines are losing their associated maintenance configurations/Schedules.
+Patches aren't getting applied for the machines that are in a shutdown state. You might also see that machines are losing their associated maintenance configurations or schedules.

#### Cause

The machines are in a shutdown state.

-### Resolution:
+### Resolution

-Keep your machines turned on at least 15 minutes before the scheduled update. For more information, see, [Shut down machines](../virtual-machines/maintenance-configurations.md#shut-down-machines).
+Keep your machines turned on at least 15 minutes before the scheduled update. For more information, see [Shut down machines](../virtual-machines/maintenance-configurations.md#shut-down-machines).

+### Patch run failed with Maintenance window exceeded property showing true even if time remained

-### Scenario: Patch run failed with Maintenance window exceeded property showing true even if time remained
+Here's the scenario.

#### Issue

-When you view an update deployment in **Update History**, the property **Failed with Maintenance window exceeded** shows **true** even though enough time was left for execution. In this case, the one of the following is possible:
+When you view an update deployment in **Update History**, the property **Failed with Maintenance window exceeded** shows **true** even though enough time was left for execution. In this case, one of the following problems is possible:

* No updates are shown.
* One or more updates are in a **Pending** state.

When you view an update deployment in **Update History**

#### Cause

-During an update deployment, it checks for maintenance window utilization at multiple steps.
10 minutes of the maintenance window are reserved for reboot at any point. Before getting a list of missing updates or downloading/installing any update (except Windows service pack updates), it checks to verify if there are 15 minutes + 10 minutes for reboot (that is, 25 mins left in the maintenance window).
-For Windows service pack updates, we check for 20 minutes + 10 minutes for reboot (that is, 30 minutes). If the deployment doesn't have the sufficient left, it skips the scan/download/install of updates. The deployment run then checks if a reboot is needed and if there's ten minutes left in the maintenance window. If there is, the deployment triggers a reboot, otherwise the reboot is skipped. In such cases, the status is updated to **Failed**, and the Maintenance window exceeded property is updated to ***true**. For cases where the time left is less than 25 minutes, updates aren't scanned or attempted for installation.
+During an update deployment, Maintenance window utilization is checked at multiple steps. Ten minutes of the Maintenance window are reserved for reboot at any point. Before the deployment gets a list of missing updates or downloads or installs any update (except Windows service pack updates), it checks to verify if there are 15 minutes + 10 minutes for reboot (that is, 25 minutes left in the Maintenance window).

-More details can be found by reviewing the logs in the file path provided in the error message of the deployment run.
+For Windows service pack updates, the deployment checks for 20 minutes + 10 minutes for reboot (that is, 30 minutes). If the deployment doesn't have sufficient time left, it skips the scan/download/installation of updates. The deployment run then checks if a reboot is needed and if 10 minutes are left in the Maintenance window. If so, the deployment triggers a reboot. Otherwise, the reboot is skipped.
-#### Resolution --Setting a longer time range for maximum duration when triggering an [on-demand update deployment](deploy-updates.md) helps avoid the problem. +In such cases, the status is updated to **Failed**, and the **Maintenance window exceeded** property is updated to **true**. For cases where the time left is less than 25 minutes, updates aren't scanned or attempted for installation. +To find more information, review the logs in the file path provided in the error message of the deployment run. +#### Resolution +Set a longer time range for maximum duration when you're triggering an [on-demand update deployment](deploy-updates.md) to help avoid the problem. ## Next steps -* To learn more about Azure Update Manager, see the [Overview](overview.md). +* To learn more about Update Manager, see the [Overview](overview.md). * To view logged results from all your machines, see [Querying logs and results from Update Manager](query-logs.md). |
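The maintenance-window thresholds described in the cause section above (a 10-minute reboot reserve, 25 minutes total for regular updates, 30 minutes total for service pack updates) can be sketched as a small decision helper. This is a simplified illustration of the documented checks, not the service's actual implementation:

```python
REBOOT_RESERVE_MIN = 10  # minutes always reserved for a reboot

def can_proceed(minutes_left: float, service_pack: bool = False) -> bool:
    """Check whether enough maintenance window remains to scan/download/install.

    Regular updates need 15 min plus the 10-min reboot reserve (25 min total);
    Windows service pack updates need 20 min plus the reserve (30 min total).
    """
    needed = (20 if service_pack else 15) + REBOOT_RESERVE_MIN
    return minutes_left >= needed

def can_reboot(minutes_left: float) -> bool:
    """A reboot is triggered only if at least the 10-minute reserve remains."""
    return minutes_left >= REBOOT_RESERVE_MIN
```

For example, `can_proceed(24)` is false, which corresponds to the documented case where updates aren't scanned or attempted when less than 25 minutes remain. Whether the boundary is inclusive is an assumption of this sketch.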
update-center | Update Manager Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/update-manager-faq.md | This FAQ is a list of commonly asked questions about Azure Update Manager. If y ## Fundamentals -### What are the benefits of using Azure Update Manager over Automation Update Management? +### What are the benefits of using Azure Update Manager? Azure Update Manager provides a SaaS solution to manage and govern software updates to Windows and Linux machines across Azure, on-premises, and multi-cloud environments. Following are the benefits of using Azure Update Azure Update Manager doesn't currently support Azure Lighthouse integration. ### Does Azure Update Manager support Azure Policy? -Yes, Azure Update Manager supports update features via policies. For more information, see [how to enable periodic assessment at scale using policy](periodic-assessment-at-scale.md) and [how to enable schedules on your machines at scale using Policy](scheduled-patching.md#onboarding-to-schedule-using-policy). +Yes, Azure Update Manager supports update features via policies. For more information, see [how to enable periodic assessment at scale using policy](periodic-assessment-at-scale.md) and [how to enable schedules on your machines at scale using Azure Policy](scheduled-patching.md#onboard-to-schedule-by-using-azure-policy). ### I have machines across multiple subscriptions in Automation Update Management. Is this scenario supported in Azure Update Manager? |
update-center | Updates Maintenance Schedules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/updates-maintenance-schedules.md |
Title: Updates and maintenance in Azure Update Manager.
-description: The article describes the updates and maintenance options available in Azure Update Manager.
+ Title: Updates and maintenance in Azure Update Manager
+description: This article describes the updates and maintenance options available in Azure Update Manager.
 Last updated 09/18/2023

-> - For a seamless scheduled patching experience, we recommend that for all Azure VMs, you update the patch orchestration to **Customer Managed Schedules**. If you fail to update the patch orchestration, you can experience a disruption in business continuity because the schedules will fail to patch the VMs. [Learn more](prerequsite-for-schedule-patching.md).
-> - For Arc-enabled servers, the updates and maintenance options such as Automatic VM Guest patching in Azure, Windows automatic updates and Hotpatching aren't supported.
+> - For a seamless scheduled patching experience, we recommend that for all Azure virtual machines (VMs), you update the patch orchestration to **Customer Managed Schedules**. If you fail to update the patch orchestration, you can experience a disruption in business continuity because the schedules fail to patch the VMs. For more information, see [Configure schedule patching on Azure VMs to ensure business continuity](prerequsite-for-schedule-patching.md).
+> - For Azure Arc-enabled servers, the updates and maintenance options such as automatic VM guest patching in Azure, Windows automatic updates, and hot patching aren't supported.

+This article provides an overview of the various update and maintenance options available in Azure Update Manager.

-This article provides an overview of the various update and maintenance options available by Azure Update Manager.
+Update Manager provides you with the flexibility to take an immediate action or schedule an update within a defined maintenance window. It also supports new patching methods, such as [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md) and [hot patching](../automanage/automanage-hotpatch.md?context=%2fazure%2fvirtual-machines%2fcontext%2fcontext). -Azure Update Manager provides you the flexibility to take an immediate action or schedule an update within a defined maintenance window. It also supports new patching methods such as [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md), [Hotpatching](../automanage/automanage-hotpatch.md?context=%2fazure%2fvirtual-machines%2fcontext%2fcontext) and so on. ---## Update Now/One-time update --Azure Update Manager allows you to secure your machines immediately by installing updates on demand. To perform the on-demand updates, see [Check and install one time updates](deploy-updates.md#install-updates-on-a-single-vm). +## Update now/One-time update +Update Manager allows you to secure your machines immediately by installing updates on demand. To perform the on-demand updates, see [Check and install one-time updates](deploy-updates.md#install-updates-on-a-single-vm). ## Scheduled patching-You can create a schedule on a daily, weekly or hourly cadence as per your requirement, specify the machines that must be updated as part of the schedule, and the updates must be installed. The schedule will then automatically install the updates as per the specifications. -Azure Update Manager uses maintenance control schedule instead of creating its own schedules. Maintenance control enables customers to manage platform updates. For more information, see the [Maintenance control documentation](/azure/virtual-machines/maintenance-control). -Start using [scheduled patching](scheduled-patching.md) to create and save recurring deployment schedules. 
+You can create a schedule on a daily, weekly, or hourly cadence according to your requirement. You can specify the machines that must be updated as part of the schedule and the updates that must be installed. The schedule then automatically installs the updates according to the specifications. -> [!NOTE] -> Patch orchestration property for Azure machines should be set to **Customer Managed Schedules** as it is a prerequisite for scheduled patching. For more information, see the [list of prerequisites](../update-center/scheduled-patching.md#prerequisites-for-scheduled-patching). +Update Manager uses a maintenance control schedule instead of creating its own schedules. Maintenance control enables customers to manage platform updates. For more information, see the [Maintenance control documentation](/azure/virtual-machines/maintenance-control). +Start by using [scheduled patching](scheduled-patching.md) to create and save recurring deployment schedules. -## Automatic VM Guest patching in Azure +> [!NOTE] +> The patch orchestration property for Azure machines should be set to **Customer Managed Schedules** because it's a prerequisite for scheduled patching. For more information, see the [list of prerequisites](../update-center/scheduled-patching.md#prerequisites-for-scheduled-patching). -This mode of patching lets the Azure platform automatically download and install all the security and critical updates on your machines every month and apply them on your machines following the availability-first principles. For more information, see [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md). +## Automatic VM guest patching in Azure -In **Azure Update Manager** home page, go to **Update Settings** blade, select Patch orchestration as **Azure Managed - Safe Deployment** value to enable this VM property. 
+This mode of patching lets the Azure platform automatically download and install all the security and critical updates on your machines every month and apply them on your machines following the availability-first principles. For more information, see [Automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md).

+On the **Azure Update Manager** home page, go to **Update Settings** and set **Patch orchestration** to **Azure Managed - Safe Deployment** to enable this VM property.

## Windows automatic updates
-This mode of patching allows operating system to automatically install updates as soon as they are available. It uses the VM property that is enabled by setting the patch orchestration to OS orchestrated/Automatic by OS.
+
+This mode of patching allows the operating system to automatically install updates as soon as they're available. It uses the VM property that's enabled by setting the patch orchestration to OS orchestrated/automatic by the OS.

## Hot patching

-Hot patching allows you to install updates on supported Windows Server Azure Edition virtual machines without requiring a reboot after installation. It reduces the number of reboots required on your mission critical application workloads running on Windows Server. For more information, see [Hot patch for new virtual machines](../automanage/automanage-hotpatch.md)
+Hot patching allows you to install updates on supported Windows Server Azure Edition VMs without requiring a reboot after installation. It reduces the number of reboots required on your mission-critical application workloads running on Windows Server. For more information, see [Hot patch for new virtual machines](../automanage/automanage-hotpatch.md).

-Hotpatching property is available as a setting in Update Manager which you can enable by using Update settings flow.
Refer to detailed instructions [here](manage-update-settings.md#configure-settings-on-a-single-vm) +The hot patching property is available as a setting in Update Manager. You can enable it by using the Update settings flow. For detailed instructions, see [Manage update configuration settings](manage-update-settings.md#configure-settings-on-a-single-vm). ## Next steps -* To view update assessment and deployment logs generated by Azure Update Manager, see [query logs](query-logs.md). -* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) Update Manager. +* To view update assessment and deployment logs generated by Update Manager, see [Query logs](query-logs.md). +* To troubleshoot issues, see [Troubleshoot Update Manager](troubleshoot.md). |
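The daily, weekly, or hourly cadence described for scheduled patching amounts to a start time plus a recurrence interval. The sketch below is purely illustrative of that idea; the function and its logic are assumptions for illustration, not how maintenance configurations are actually implemented:

```python
from datetime import datetime, timedelta

def next_window_start(start: datetime, every: timedelta, now: datetime) -> datetime:
    """Return the first occurrence of a recurring window at or after `now`.

    `every` models the cadence: timedelta(hours=1), timedelta(days=1),
    or timedelta(weeks=1).
    """
    if now <= start:
        return start
    elapsed = now - start
    periods = -(-elapsed // every)  # ceiling division of timedeltas
    return start + periods * every
```

For example, a weekly window first scheduled for Monday 02:00 and queried midweek resolves to the following Monday 02:00.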
update-center | View Updates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/view-updates.md | Title: Check update compliance in Azure Update Manager -description: The article details how to use Azure Update Manager in the Azure portal to assess update compliance for supported machines. +description: This article explains how to use Azure Update Manager in the Azure portal to assess update compliance for supported machines. Last updated 09/18/2023 -This article details how to check the status of available updates on a single VM or multiple VMs using Update Manager. +This article explains how to check the status of available updates on a single VM or multiple VMs by using Azure Update Manager. +## Check updates on a single VM -## Check updates on single VM +You can check the updates from the **Overview** or **Machines** pane on the **Update Manager** page or from the selected VM. ->[!NOTE] -> You can check the updates from the Overview or Machines blade in Update Manager page or from the selected VM. --# [From Overview blade](#tab/singlevm-overview) +# [From the Overview pane](#tab/singlevm-overview) 1. Sign in to the [Azure portal](https://portal.azure.com). -1. In **Azure Update Manager**, **Overview**, select your **Subscription** to view all your machines and select **Check for updates**. +1. On the **Azure Update Manager** | **Overview** page, select your subscription to view all your machines, and then select **Check for updates**. -1. In **Select resources and check for updates**, choose the machine for which you want to check the updates and select **Check for updates**. +1. On the **Select resources and check for updates** pane, choose the machine that you want to check for updates, and then select **Check for updates**. An assessment is performed and a notification appears as a confirmation. 
- :::image type="content" source="./media/view-updates/check-updates-overview-inline.png" alt-text="Screenshot of checking updates from Overview." lightbox="./media/view-updates/check-updates-overview-expanded.png"::: + :::image type="content" source="./media/view-updates/check-updates-overview-inline.png" alt-text="Screenshot that shows checking updates from Overview." lightbox="./media/view-updates/check-updates-overview-expanded.png"::: - The **Update status of machines**, **Patch orchestration configuration** of Azure virtual machines, and **Total installation runs** tiles are refreshed and display the results. -+ The **Update status of machines**, **Patch orchestration configuration** of Azure VMs, and **Total installation runs** tiles are refreshed and display the results. -# [From Machines blade](#tab/singlevm-machines) +# [From the Machines pane](#tab/singlevm-machines) 1. Sign in to the [Azure portal](https://portal.azure.com). -1. In **Azure Update Manager**, **Machines**, select your **Subscription** to view all your machines. +1. On the **Azure Update Manager** | **Machines** page, select your subscription to view all your machines. -1. Select your machine from the checkbox and select **Check for updates**, **Assess now** or alternatively, you can select your machine, in **Updates**, select **Assess updates**, and in **Trigger assess now**, select **OK**. -- An assessment is performed and a notification appears first that the *Assessment is in progress* and after a successful assessment, you will see *Assessment successful* else, you will see the notification *Assessment Failed*. For more information, see [update assessment scan](assessment-options.md#update-assessment-scan). +1. Select the checkbox for your machine, and then select **Check for updates** > **Assess now**. Alternatively, you can select your machine and in **Updates**, select **Assess updates**. In **Trigger assess now**, select **OK**. 
+ An assessment is performed and a notification appears first that says **Assessment is in progress**. After a successful assessment, you see **Assessment successful**. Otherwise, you see the notification **Assessment Failed**. For more information, see [Update assessment scan](assessment-options.md#update-assessment-scan). # [From a selected VM](#tab/singlevm-home) -1. Select your virtual machine and the **virtual machines | Updates** page opens. +1. Select your virtual machine to open the **Virtual machines | Updates** page. 1. Under **Operations**, select **Updates**.-1. In **Updates**, select **Go to Updates using Update Manager**. +1. On the **Updates** pane, select **Go to Updates using Update Manager**. - :::image type="content" source="./media/view-updates/resources-check-updates.png" alt-text="Screenshot showing selection of updates from Home page."::: + :::image type="content" source="./media/view-updates/resources-check-updates.png" alt-text="Screenshot that shows selection of updates from the home page."::: -1. In **Updates**, select **Check for updates**, in **Trigger assess now**, select **OK**. +1. On the **Updates** page, select **Check for updates**. In **Trigger assess now**, select **OK**. - An assessment is performed and a notification appears first that the *Assessment is in progress* and after a successful assessment, you will see *Assessment successful* else, you will see the notification *Assessment Failed*. + An assessment is performed and a notification says **Assessment is in progress**. After the assessment, you see **Assessment successful** or **Assessment failed**. - :::image type="content" source="./media/view-updates/check-updates-home-inline.png" alt-text="Screenshot of status after checking updates." lightbox="./media/view-updates/check-updates-home-expanded.png"::: + :::image type="content" source="./media/view-updates/check-updates-home-inline.png" alt-text="Screenshot that shows the status after checking updates." 
lightbox="./media/view-updates/check-updates-home-expanded.png"::: - For more information, see [update assessment scan](assessment-options.md#update-assessment-scan). - - + For more information, see [Update assessment scan](assessment-options.md#update-assessment-scan). ++ ## Check updates at scale -To check the updates on your machines at scale, follow these steps: +To check the updates on your machines at scale, follow these steps. ->[!NOTE] -> You can check the updates from the **Overview** or **Machines** blade. +You can check the updates from the **Overview** or **Machines** pane. -# [From Overview blade](#tab/at-scale-overview) +# [From the Overview pane](#tab/at-scale-overview) 1. Sign in to the [Azure portal](https://portal.azure.com). -1. In **Azure Update Manager**, **Overview**, select your **Subscription** to view all your machines and select **Check for updates**. +1. On the **Azure Update Manager** | **Overview** page, select your subscription to view all your machines and select **Check for updates**. -1. In **Select resources and check for updates**, choose your machines for which you want to check the updates and select **Check for updates**. +1. On the **Select resources and check for updates** pane, choose the machines that you want to check for updates and select **Check for updates**. - An assessment is performed and a notification appears as a confirmation. + An assessment is performed and a notification appears as a confirmation. The **Update status of machines**, **Patch orchestration configuration** of Azure virtual machines, and **Total installation runs** tiles are refreshed and display the results. --# [From Machines blade](#tab/at-scale-machines) +# [From the Machines pane](#tab/at-scale-machines) 1. Sign in to the [Azure portal](https://portal.azure.com). -1. In **Azure Update Manager**, **Machines**, select your **Subscription** to view all your machines. +1. 
On the **Azure Update Manager** | **Machines** page, select your subscription to view all your machines. -1. Select the **Select all** to choose all your machines and select **Check for updates**. +1. Choose **Select all** to select all your machines, and then select **Check for updates**. 1. Select **Assess now** to perform the assessment. - A notification appears when the operation is initiated and completed. After a successful scan, the **Update Manager | Machines** page is refreshed to display the updates. + A notification appears when the operation is initiated and finished. After a successful scan, the **Update Manager | Machines** page is refreshed to display the updates. > [!NOTE]-> In Azure Update Manager, you can initiate a software updates compliance scan on the machine to get the current list of operating system (guest) updates including the security and critical updates. On Windows, the software update scan is performed by the Windows Update Agent. On Linux, the software update scan is performed using OVAL-compatible tools to test for the presence of vulnerabilities based on the OVAL Definitions for that platform, which is retrieved from a local or remote repository. -+> In Update Manager, you can initiate a software updates compliance scan on the machine to get the current list of operating system (guest) updates, including the security and critical updates. On Windows, the Windows Update Agent performs the software update scan. On Linux, the software update scan is performed by using OVAL-compatible tools to test for the presence of vulnerabilities based on the OVAL definitions for that platform, which are retrieved from a local or remote repository. - ## Next steps -* Learn about deploying updates on your machines to maintain security compliance by reading [deploy updates](deploy-updates.md). -* To view the update assessment and deployment logs generated by Update Manager, see [query logs](query-logs.md). 
-* To troubleshoot issues, see [Troubleshoot](troubleshoot.md) Azure Update Manager. +* To learn how to deploy updates on your machines to maintain security compliance, see [Deploy updates](deploy-updates.md). +* To view the update assessment and deployment logs generated by Update Manager, see [Query logs](query-logs.md). +* To troubleshoot issues, see [Troubleshoot Update Manager](troubleshoot.md). |
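The portal flow above has a single-VM command-line counterpart. The sketch below is illustrative only and assumes a recent Azure CLI that includes the `az vm assess-patches` command; the resource group and VM names are placeholders.

```azurecli
# Trigger an on-demand update assessment for one Azure VM (placeholder names).
# The command blocks until the scan finishes and returns the assessment result,
# including counts of pending critical and security updates.
az vm assess-patches \
    --resource-group <ResourceGroupName> \
    --name <VMName> \
    --output table
```

This triggers the same guest-level scan as **Check for updates** in the portal; the results then appear on the machine's **Updates** page.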
update-center | Whats Upcoming | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/whats-upcoming.md | Title: What's upcoming in Azure Update Manager -description: Learn about what's upcoming and updates in the Update Manager service. +description: Learn about what's upcoming and updates in Azure Update Manager. Last updated 09/20/2023 # What are the upcoming features in Azure Update Manager -The primary [what's New in Azure Update Manager](whats-new.md) contains updates of feature releases and this article lists all the upcoming features. +The article [What's new in Azure Update Manager](whats-new.md) contains updates of feature releases. This article lists all the upcoming features for Azure Update Manager. -## Expanded support for Operating system and VM images - -Expanded support for [specialized images](../virtual-machines/linux/imaging.md#specialized-images), VMs created by Azure Migrate, Azure Backup, Azure Site Recovery, and marketplace images are upcoming in Q3, CY 2023. Until then, we recommend that you continue using [Automation update management](../automation/update-management/overview.md) for these images. [Learn more](support-matrix.md#supported-operating-systems) +## Expanded support for operating system and VM images -## Prescript and postscript +Expanded support for [specialized images](../virtual-machines/linux/imaging.md#specialized-images), virtual machines created by Azure Migrate, Azure Backup, and Azure Site Recovery, and Azure Marketplace images is upcoming in the third quarter of 2023. Until then, we recommend that you continue using [Automation Update Management](../automation/update-management/overview.md) for these images. For more information, see [Support matrix for Update Manager](support-matrix.md#supported-operating-systems). -The ability to execute Azure Automation runbook scripts before or after deploying scheduled updates to machines will be available by Q4, CY2023. 
+Update Manager will be declared generally available soon. ++## Prescript and postscript ++The ability to execute Azure Automation runbook scripts before or after deploying scheduled updates to machines will be available by the fourth quarter of 2023. ## Next steps -- [Learn more](support-matrix.md) about supported regions.+For more information about supported regions, see [Support matrix for Update Manager](support-matrix.md). |
update-center | Workbooks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/workbooks.md | Title: An overview of Workbooks + Title: An overview of workbooks description: This article provides information on how workbooks provide a flexible canvas for data analysis and the creation of rich visual reports. Last updated 09/18/2023-# About Workbooks +# About workbooks **Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers. -Workbooks help you to create visual reports that help in data analysis. This article describes the various features that Workbooks offer in Update Manager. +Workbooks help you to create visual reports that help in data analysis. This article describes the various features that workbooks offer in Azure Update Manager. ## Key benefits-- Provides a canvas for data analysis and creation of visual reports-- Access specific metrics from within the reports++- Use as a canvas for data analysis and creation of visual reports. +- Access specific metrics from within the reports. - Create interactive reports with various kinds of visualizations. - Create, share, and pin workbooks to the dashboard.-- Combine text, log queries, metrics, and parameters to make rich visual reports. +- Combine text, log queries, metrics, and parameters to make rich visual reports. ## The gallery The gallery lists all the saved workbooks and templates for your workspace. You can easily organize, sort, and manage workbooks of all types. - :::image type="content" source="./media/workbooks/workbooks-gallery.png" alt-text="Screenshot of workbooks gallery."::: + :::image type="content" source="./media/workbooks/workbooks-gallery.png" alt-text="Screenshot that shows the workbooks gallery."::: -- It comprises of the following four tabs that help you organize workbook types:+The following four tabs help you organize workbook types. 
| Tab | Description | |||- | All | Shows the top four items for workbooks, public templates, and my templates. Workbooks are sorted by modified date, so you'll see the most recent eight modified workbooks.| - | Workbooks | Shows the list of all the available workbooks that you created or are shared with you. | - | Public Templates | Shows the list of all the available ready to use, get started functional workbook templates published by Microsoft. Grouped by category. | - | My Templates | Shows the list of all the available deployed workbook templates that you created or are shared with you. Grouped by category. | + | **All** | Shows the top four items for **Workbooks**, **Public Templates**, and **My Templates**. Workbooks are sorted by modified date, so you see the most recent eight modified workbooks.| + | **Workbooks** | Shows the list of all the available workbooks that you created or are shared with you. | + | **Public Templates** | Shows the list of all the available ready-to-use, get-started functional workbook templates published by Microsoft. Grouped by category. | + | **My Templates** | Shows the list of all the available deployed workbook templates that you created or are shared with you. Grouped by category. | -- In the **Quick start** tile, you can create new workbooks.- :::image type="content" source="./media/workbooks/quickstart-workbooks.png" alt-text="Screenshot of creating a new workbook using Quick start."::: +- On the **Quick start** tile, you can create new workbooks. -- In the **Recently modified** tile, you can view and edit the workbooks.+ :::image type="content" source="./media/workbooks/quickstart-workbooks.png" alt-text="Screenshot that shows creating a new workbook by using Quick start."::: -- In the **Azure Update Manager** tile, you can view the following summary:- :::image type="content" source="./media/workbooks/workbooks-summary-inline.png" alt-text="Screenshot of workbook summary." 
lightbox="./media/workbooks/workbooks-summary-expanded.png"::: +- On the **Azure Update Manager** tile, you can view the following summary. + + :::image type="content" source="./media/workbooks/workbooks-summary-inline.png" alt-text="Screenshot that shows a workbook summary." lightbox="./media/workbooks/workbooks-summary-expanded.png"::: + - **Machines overall status and configurations**: Provides the status of all machines in a specific subscription. - - **Machines overall status and configurations** - provides the status of all machines in a specific subscription. -- :::image type="content" source="./media/workbooks/workbooks-machine-overall-status-inline.png" alt-text="Screenshot of the overall status and configuration of machines." lightbox="./media/workbooks/workbooks-machine-overall-status-expanded.png"::: + :::image type="content" source="./media/workbooks/workbooks-machine-overall-status-inline.png" alt-text="Screenshot that shows the overall status and configuration of machines." lightbox="./media/workbooks/workbooks-machine-overall-status-expanded.png"::: - - **Updates data overview** - provides a summary of machines that have no updates, assessments and reboot needed including the pending Windows and Linux updates by classification and by machine count. + - **Updates data overview**: Provides a summary of machines that have no updates, assessments, and reboot needed, including the pending Windows and Linux updates by classification and by machine count. - :::image type="content" source="./media/workbooks/workbooks-machines-updates-status-inline.png" alt-text="Screenshot of summary of machines that no updates, and assessments." lightbox="./media/workbooks/workbooks-machines-updates-status-expanded.png"::: + :::image type="content" source="./media/workbooks/workbooks-machines-updates-status-inline.png" alt-text="Screenshot that shows a summary of machines that have no updates and assessments needed." 
lightbox="./media/workbooks/workbooks-machines-updates-status-expanded.png"::: - - **Schedules/maintenance configurations** - provides a summary of schedules, maintenance configurations and list of machines attached to the schedule. You can also access the maintenance configuration overview page from this section. + - **Schedules/Maintenance configurations**: Provides a summary of schedules, maintenance configurations, and a list of machines attached to the schedule. You can also access the maintenance configuration overview page from this section. - :::image type="content" source="./media/workbooks/workbooks-schedules-maintenance-inline.png" alt-text="Screenshot of summary of schedules and maintenance configurations." lightbox="./media/workbooks/workbooks-schedules-maintenance-expanded.png"::: + :::image type="content" source="./media/workbooks/workbooks-schedules-maintenance-inline.png" alt-text="Screenshot that shows a summary of schedules and maintenance configurations." lightbox="./media/workbooks/workbooks-schedules-maintenance-expanded.png"::: ++ - **History of installation runs**: Provides a history of machines and maintenance runs. - - **History of installation runs** - provides a history of machines and maintenance runs. - :::image type="content" source="./media/workbooks/workbooks-history-installation-inline.png" alt-text="Screenshot of history of installation runs." lightbox="./media/workbooks/workbooks-history-installation-expanded.png"::: + :::image type="content" source="./media/workbooks/workbooks-history-installation-inline.png" alt-text="Screenshot that shows a history of installation runs." lightbox="./media/workbooks/workbooks-history-installation-expanded.png"::: For information on how to use the workbooks for customized reporting, see [Edit a workbook](manage-workbooks.md#edit-a-workbook). 
## Next steps - Learn about deploying updates to your machines to maintain security compliance by reading [deploy updates](deploy-updates.md) + To learn how to deploy updates to your machines to maintain security compliance, see [Deploy updates](deploy-updates.md). |
virtual-desktop | Add Session Hosts Host Pool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/add-session-hosts-host-pool.md | Here's how to generate a registration key using the Azure portal. 1. Select **Download** to download a text file containing the registration key, or copy the registration key to your clipboard to use later. You can also retrieve the registration key later by returning to the host pool overview. -# [Azure CLI](#tab/cli) --Here's how to generate a registration key using the [desktopvirtualization](/cli/azure/desktopvirtualization) extension for Azure CLI. --> [!IMPORTANT] -> In the following examples, you'll need to change the `<placeholder>` values for your own. --2. Use the `az desktopvirtualization workspace update` command with the following example to generate a registration key that is valid for 24 hours. -- ```azurecli - az desktopvirtualization hostpool update \ - --name <Name> \ - --resource-group <ResourceGroupName> \ - --registration-info expiration-time=$(date -d '+24 hours' --iso-8601=ns) registration-token-operation="Update" - ``` --3. Get the registration key and copy it to your clipboard to use later. You can also retrieve the registration key later by running this command again anytime while the registration key is valid. -- ```azurecli - az desktopvirtualization hostpool retrieve-registration-token \ - --name <Name> \ - --resource-group <ResourceGroupName> \ - --query token --output tsv - ``` # [Azure PowerShell](#tab/powershell) Here's how to generate a registration key using the [Az.DesktopVirtualization](/ (Get-AzWvdHostPoolRegistrationToken @parameters).Token ``` +# [Azure CLI](#tab/cli) ++Here's how to generate a registration key using the [desktopvirtualization](/cli/azure/desktopvirtualization) extension for Azure CLI. ++> [!IMPORTANT] +> In the following examples, you'll need to change the `<placeholder>` values for your own. ++2. 
Use the `az desktopvirtualization hostpool update` command with the following example to generate a registration key that is valid for 24 hours. ++ ```azurecli + az desktopvirtualization hostpool update \ + --name <Name> \ + --resource-group <ResourceGroupName> \ + --registration-info expiration-time=$(date -d '+24 hours' --iso-8601=ns) registration-token-operation="Update" + ``` ++3. Get the registration key and copy it to your clipboard to use later. You can also retrieve the registration key later by running this command again anytime while the registration key is valid. ++ ```azurecli + az desktopvirtualization hostpool retrieve-registration-token \ + --name <Name> \ + --resource-group <ResourceGroupName> \ + --query token --output tsv + ``` + ## Create and register session hosts with the Azure Virtual Desktop service |
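Putting the two CLI steps from this article together, a minimal sketch renews the registration key and captures the token in a shell variable for later session host registration. This is a sketch only; the host pool and resource group names are placeholders.

```azurecli
# Renew the host pool registration key with a 24-hour expiry (GNU date syntax).
az desktopvirtualization hostpool update \
    --name <HostPoolName> \
    --resource-group <ResourceGroupName> \
    --registration-info expiration-time=$(date -d '+24 hours' --iso-8601=ns) registration-token-operation="Update"

# Capture the registration token for use when registering session hosts.
token=$(az desktopvirtualization hostpool retrieve-registration-token \
    --name <HostPoolName> \
    --resource-group <ResourceGroupName> \
    --query token --output tsv)
```

The `$token` value is what a session host's Azure Virtual Desktop agent consumes during registration, so it should be handled like a secret and allowed to expire.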
virtual-desktop | Create Application Group Workspace | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-application-group-workspace.md | Here's how to create an application group using the Azure portal. 1. Once the application group has been created, select **Go to resource** to go to the overview of your new application group, then select **Properties** to view its properties. -# [Azure CLI](#tab/cli) --Here's how to create an application group using the [desktopvirtualization](/cli/azure/desktopvirtualization) extension for Azure CLI. --> [!IMPORTANT] -> In the following examples, you'll need to change the `<placeholder>` values for your own. ---2. Get the resource ID of the host pool you want to create an application group for and store it in a variable by running the following command: -- ```azurecli - hostPoolArmPath=$(az desktopvirtualization hostpool show \ - --name <Name> \ - --resource-group <ResourceGroupName> \ - --query [id] \ - --output tsv) - ``` --3. Use the `az desktopvirtualization applicationgroup create` command with the following examples to create an application group. For more information, see the [az desktopvirtualization applicationgroup Azure CLI reference](/cli/azure/desktopvirtualization/applicationgroup). -- 1. To create a Desktop application group in the Azure region UK South, run the following command: -- ```azurecli - az desktopvirtualization applicationgroup create \ - --name <Name> \ - --resource-group <ResourceGroupName> \ - --application-group-type Desktop \ - --host-pool-arm-path $hostPoolArmPath \ - --location uksouth - ``` -- 1. To create a RemoteApp application group in the Azure region UK South, run the following command. You can only create a RemoteApp application group with a pooled host pool. 
-- ```azurecli - az desktopvirtualization applicationgroup create \ - --name <Name> \ - --resource-group <ResourceGroupName> \ - --application-group-type RemoteApp \ - --host-pool-arm-path $hostPoolArmPath \ - --location uksouth - ``` --4. You can view the properties of your new application group by running the following command: -- ```azurecli - az desktopvirtualization applicationgroup show --name <Name> --resource-group <ResourceGroupName> - ``` - # [Azure PowerShell](#tab/powershell) Here's how to create an application group using the [Az.DesktopVirtualization](/powershell/module/az.desktopvirtualization) PowerShell module. Here's how to create an application group using the [Az.DesktopVirtualization](/ Get-AzWvdApplicationGroup -Name <Name> -ResourceGroupName <ResourceGroupName> | FL * ``` +# [Azure CLI](#tab/cli) ++Here's how to create an application group using the [desktopvirtualization](/cli/azure/desktopvirtualization) extension for Azure CLI. ++> [!IMPORTANT] +> In the following examples, you'll need to change the `<placeholder>` values for your own. +++2. Get the resource ID of the host pool you want to create an application group for and store it in a variable by running the following command: ++ ```azurecli + hostPoolArmPath=$(az desktopvirtualization hostpool show \ + --name <Name> \ + --resource-group <ResourceGroupName> \ + --query [id] \ + --output tsv) + ``` ++3. Use the `az desktopvirtualization applicationgroup create` command with the following examples to create an application group. For more information, see the [az desktopvirtualization applicationgroup Azure CLI reference](/cli/azure/desktopvirtualization/applicationgroup). ++ 1. 
To create a Desktop application group in the Azure region UK South, run the following command: ++ ```azurecli + az desktopvirtualization applicationgroup create \ + --name <Name> \ + --resource-group <ResourceGroupName> \ + --application-group-type Desktop \ + --host-pool-arm-path $hostPoolArmPath \ + --location uksouth + ``` ++ 1. To create a RemoteApp application group in the Azure region UK South, run the following command. You can only create a RemoteApp application group with a pooled host pool. ++ ```azurecli + az desktopvirtualization applicationgroup create \ + --name <Name> \ + --resource-group <ResourceGroupName> \ + --application-group-type RemoteApp \ + --host-pool-arm-path $hostPoolArmPath \ + --location uksouth + ``` ++4. You can view the properties of your new application group by running the following command: ++ ```azurecli + az desktopvirtualization applicationgroup show --name <Name> --resource-group <ResourceGroupName> + ``` + ## Create a workspace Here's how to create a workspace using the Azure portal. 1. Once the workspace has been created, select **Go to resource** to go to the overview of your new workspace, then select **Properties** to view its properties. -# [Azure CLI](#tab/cli) -Here's how to create a workspace using the [desktopvirtualization](/cli/azure/desktopvirtualization) extension for Azure CLI. +# [Azure PowerShell](#tab/powershell) ++Here's how to create a workspace using the [Az.DesktopVirtualization](/powershell/module/az.desktopvirtualization) PowerShell module. > [!IMPORTANT] > In the following examples, you'll need to change the `<placeholder>` values for your own. -2. Use the `az desktopvirtualization workspace create` command with the following example to create a workspace. More parameters are available, such as to register existing application groups. For more information, see the [az desktopvirtualization workspace Azure CLI reference](/cli/azure/desktopvirtualization/workspace). +2. 
Use the `New-AzWvdWorkspace` cmdlet with the following example to create a workspace. More parameters are available, such as to register existing application groups. For more information, see the [New-AzWvdWorkspace PowerShell reference](/powershell/module/az.desktopvirtualization/new-azwvdworkspace). - ```azurecli - az desktopvirtualization workspace create --name <Name> --resource-group <ResourceGroupName> + ```azurepowershell + New-AzWvdWorkspace -Name <Name> -ResourceGroupName <ResourceGroupName> ``` 3. You can view the properties of your new workspace by running the following command: - ```azurecli - az desktopvirtualization workspace show --name <Name> --resource-group <ResourceGroupName> + ```azurepowershell + Get-AzWvdWorkspace -Name <Name> -ResourceGroupName <ResourceGroupName> | FL * ``` -# [Azure PowerShell](#tab/powershell) +# [Azure CLI](#tab/cli) -Here's how to create a workspace using the [Az.DesktopVirtualization](/powershell/module/az.desktopvirtualization) PowerShell module. +Here's how to create a workspace using the [desktopvirtualization](/cli/azure/desktopvirtualization) extension for Azure CLI. > [!IMPORTANT] > In the following examples, you'll need to change the `<placeholder>` values for your own. -2. Use the `New-AzWvdWorkspace` cmdlet with the following example to create a workspace. More parameters are available, such as to register existing application groups. For more information, see the [New-AzWvdWorkspace PowerShell reference](/powershell/module/az.desktopvirtualization/new-azwvdworkspace). +2. Use the `az desktopvirtualization workspace create` command with the following example to create a workspace. More parameters are available, such as to register existing application groups. For more information, see the [az desktopvirtualization workspace Azure CLI reference](/cli/azure/desktopvirtualization/workspace). 
- ```azurepowershell - New-AzWvdWorkspace -Name <Name> -ResourceGroupName <ResourceGroupName> + ```azurecli + az desktopvirtualization workspace create --name <Name> --resource-group <ResourceGroupName> ``` 3. You can view the properties of your new workspace by running the following command: - ```azurepowershell - Get-AzWvdWorkspace -Name <Name> -ResourceGroupName <ResourceGroupName> | FL * + ```azurecli + az desktopvirtualization workspace show --name <Name> --resource-group <ResourceGroupName> ``` Here's how to add an application group to a workspace using the Azure portal. 1. Select **Select**. The application group will be added to the workspace. ++# [Azure PowerShell](#tab/powershell) ++Here's how to add an application group to a workspace using the [Az.DesktopVirtualization](/powershell/module/az.desktopvirtualization) PowerShell module. ++> [!IMPORTANT] +> In the following examples, you'll need to change the `<placeholder>` values for your own. +++2. Use the `Update-AzWvdWorkspace` cmdlet with the following example to add an application group to a workspace: ++ ```azurepowershell + # Get the resource ID of the application group you want to add to the workspace + $appGroupPath = (Get-AzWvdApplicationGroup -Name <Name> -ResourceGroupName <ResourceGroupName>).Id ++ # Add the application group to the workspace + Update-AzWvdWorkspace -Name <Name> -ResourceGroupName <ResourceGroupName> -ApplicationGroupReference $appGroupPath + ``` ++3. You can view the properties of your workspace by running the following command. The key **ApplicationGroupReference** contains an array of the application groups added to the workspace. ++ ```azurepowershell + Get-AzWvdWorkspace -Name <Name> -ResourceGroupName <ResourceGroupName> | FL * + ``` + # [Azure CLI](#tab/cli) Here's how to add an application group to a workspace using the [desktopvirtualization](/cli/azure/desktopvirtualization) extension for Azure CLI. 
Here's how to add an application group to a workspace using the [desktopvirtuali --resource-group <ResourceGroupName> ``` -# [Azure PowerShell](#tab/powershell) --Here's how to add an application group to a workspace using the [Az.DesktopVirtualization](/powershell/module/az.desktopvirtualization) PowerShell module. --> [!IMPORTANT] -> In the following examples, you'll need to change the `<placeholder>` values for your own. ---2. Use the `Update-AzWvdWorkspace` cmdlet with the following example to add an application group to a workspace: -- ```azurepowershell - # Get the resource ID of the application group you want to add to the workspace - $appGroupPath = (Get-AzWvdApplicationGroup -Name <Name -ResourceGroupName <ResourceGroupName>).Id -- # Add the application group to the workspace - Update-AzWvdWorkspace -Name <Name> -ResourceGroupName <ResourceGroupName> -ApplicationGroupReference $appGroupPath - ``` --3. You can view the properties of your workspace by running the following command. The key **ApplicationGroupReference** contains an array of the application groups added to the workspace. -- ```azurepowershell - Get-AzWvdWorkspace -Name <Name> -ResourceGroupName <ResourceGroupName> | FL * - ``` - ## Assign users to an application group Here's how to assign users or user groups to an application group to a workspace 1. Finish by selecting **Select**. ++# [Azure PowerShell](#tab/powershell) ++Here's how to assign users or user groups to an application group to a workspace using [Az.Resources](/powershell/module/az.resources) PowerShell module. ++> [!IMPORTANT] +> In the following examples, you'll need to change the `<placeholder>` values for your own. +++2. Use the `New-AzRoleAssignment` cmdlet with the following examples to assign users or user groups to an application group. ++ 1. 
To assign users to the application group, run the following commands: ++ ```azurepowershell + $parameters = @{ + SignInName = '<UserPrincipalName>' + ResourceName = '<ApplicationGroupName>' + ResourceGroupName = '<ResourceGroupName>' + RoleDefinitionName = 'Desktop Virtualization User' + ResourceType = 'Microsoft.DesktopVirtualization/applicationGroups' + } + + New-AzRoleAssignment @parameters + ``` ++ 1. To assign user groups to the application group, run the following commands: + + ```azurepowershell + # Get the object ID of the user group you want to assign to the application group + $userGroupId = (Get-AzADGroup -DisplayName "<UserGroupName>").Id ++ # Assign users to the application group + $parameters = @{ + ObjectId = $userGroupId + ResourceName = '<ApplicationGroupName>' + ResourceGroupName = '<ResourceGroupName>' + RoleDefinitionName = 'Desktop Virtualization User' + ResourceType = 'Microsoft.DesktopVirtualization/applicationGroups' + } + + New-AzRoleAssignment @parameters + ``` + # [Azure CLI](#tab/cli) Here's how to assign users or user groups to an application group to a workspace using the [role](/cli/azure/role/assignment) extension for Azure CLI. Here's how to assign users or user groups to an application group to a workspace --scope $appGroupPath ``` -# [Azure PowerShell](#tab/powershell) --Here's how to assign users or user groups to an application group to a workspace using [Az.Resources](/powershell/module/az.resources) PowerShell module. --> [!IMPORTANT] -> In the following examples, you'll need to change the `<placeholder>` values for your own. ---2. Use the `New-AzRoleAssignment` cmdlet with the following examples to assign users or user groups to an application group. -- 1. 
To assign users to the application group, run the following commands: -- ```azurepowershell - $parameters = @{ - SignInName = '<UserPrincipalName>' - ResourceName = '<ApplicationGroupName>' - ResourceGroupName = '<ResourceGroupName>' - RoleDefinitionName = 'Desktop Virtualization User' - ResourceType = 'Microsoft.DesktopVirtualization/applicationGroups' - } - - New-AzRoleAssignment @parameters - ``` -- 1. To assign user groups to the application group, run the following commands: - - ```azurepowershell - # Get the object ID of the user group you want to assign to the application group - $userGroupId = (Get-AzADGroup -DisplayName "<UserGroupName>").Id -- # Assign users to the application group - $parameters = @{ - ObjectId = $userGroupId - ResourceName = '<ApplicationGroupName>' - ResourceGroupName = '<ResourceGroupName>' - RoleDefinitionName = 'Desktop Virtualization User' - ResourceType = 'Microsoft.DesktopVirtualization/applicationGroups' - } - - New-AzRoleAssignment @parameters - ``` - ## Next steps |
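As a rough end-to-end sketch, the CLI steps from this article can be chained: look up the host pool, create an application group, create a workspace that references it, and assign a user. This is illustrative only; the `--application-group-references` parameter name is an assumption about the desktopvirtualization extension, and all resource names are placeholders.

```azurecli
# 1. Look up the host pool resource ID (placeholder names throughout).
hostPoolArmPath=$(az desktopvirtualization hostpool show \
    --name <HostPoolName> --resource-group <ResourceGroupName> \
    --query [id] --output tsv)

# 2. Create a Desktop application group for the host pool.
az desktopvirtualization applicationgroup create \
    --name <AppGroupName> --resource-group <ResourceGroupName> \
    --application-group-type Desktop \
    --host-pool-arm-path $hostPoolArmPath --location <AzureRegion>

# 3. Create a workspace and reference the application group from it.
appGroupPath=$(az desktopvirtualization applicationgroup show \
    --name <AppGroupName> --resource-group <ResourceGroupName> \
    --query [id] --output tsv)

az desktopvirtualization workspace create \
    --name <WorkspaceName> --resource-group <ResourceGroupName> \
    --application-group-references $appGroupPath

# 4. Grant a user access by assigning the Desktop Virtualization User role
#    scoped to the application group.
az role assignment create \
    --assignee <UserPrincipalName> \
    --role "Desktop Virtualization User" \
    --scope $appGroupPath
```

Each step maps to a tab in the article above, so the same flow can equally be done with the Az.DesktopVirtualization PowerShell cmdlets shown there.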
virtual-desktop | Create Host Pool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-host-pool.md | In addition, you'll need: - Don't disable [Windows Remote Management](/windows/win32/winrm/about-windows-remote-management) (WinRM) when creating session hosts using the Azure portal, as it's required by [PowerShell DSC](/powershell/dsc/overview). -# [Azure CLI](#tab/cli) +# [Azure PowerShell](#tab/powershell) In addition, you'll need: In addition, you'll need: Alternatively you can assign the [Contributor](../role-based-access-control/built-in-roles.md#contributor) RBAC role to create all of these resource types. -- If you want to use Azure CLI locally, see [Use Azure CLI and Azure PowerShell with Azure Virtual Desktop](cli-powershell.md) to make sure you have the [desktopvirtualization](/cli/azure/desktopvirtualization) Azure CLI extension installed. Alternatively, use the [Azure Cloud Shell](../cloud-shell/overview.md).+- If you want to use Azure PowerShell locally, see [Use Azure CLI and Azure PowerShell with Azure Virtual Desktop](cli-powershell.md) to make sure you have the [Az.DesktopVirtualization](/powershell/module/az.desktopvirtualization) PowerShell module installed. Alternatively, use the [Azure Cloud Shell](../cloud-shell/overview.md). > [!IMPORTANT] > If you want to create Azure Active Directory-joined session hosts, we only support this using the Azure portal with the Azure Virtual Desktop service. -# [Azure PowerShell](#tab/powershell) +# [Azure CLI](#tab/cli) In addition, you'll need: In addition, you'll need: Alternatively you can assign the [Contributor](../role-based-access-control/built-in-roles.md#contributor) RBAC role to create all of these resource types. 
-- If you want to use Azure PowerShell locally, see [Use Azure CLI and Azure PowerShell with Azure Virtual Desktop](cli-powershell.md) to make sure you have the [Az.DesktopVirtualization](/powershell/module/az.desktopvirtualization) PowerShell module installed. Alternatively, use the [Azure Cloud Shell](../cloud-shell/overview.md).+- If you want to use Azure CLI locally, see [Use Azure CLI and Azure PowerShell with Azure Virtual Desktop](cli-powershell.md) to make sure you have the [desktopvirtualization](/cli/azure/desktopvirtualization) Azure CLI extension installed. Alternatively, use the [Azure Cloud Shell](../cloud-shell/overview.md). > [!IMPORTANT] > If you want to create Azure Active Directory-joined session hosts, we only support this using the Azure portal with the Azure Virtual Desktop service. If you also added session hosts to your host pool, there's some extra configurat [!INCLUDE [include-session-hosts-post-deployment](includes/include-session-hosts-post-deployment.md)] -# [Azure CLI](#tab/cli) --Here's how to create a host pool using the [desktopvirtualization](/cli/azure/desktopvirtualization) extension for Azure CLI. The following examples show you how to create a pooled host pool and a personal host pool. --> [!IMPORTANT] -> In the following examples, you'll need to change the `<placeholder>` values for your own. --2. Use the `az desktopvirtualization hostpool create` command with the following examples to create a host pool. More parameters are available; for more information, see the [az desktopvirtualization hostpool Azure CLI reference](/cli/azure/desktopvirtualization/hostpool). -- 1. 
To create a pooled host pool using the *breadth-first* [load-balancing algorithm](host-pool-load-balancing.md) and *Desktop* as the preferred [app group type](environment-setup.md#app-groups), run the following command: - - ```azurecli - az desktopvirtualization hostpool create \ - --name <Name> \ - --resource-group <ResourceGroupName> \ - --host-pool-type Pooled \ - --load-balancer-type BreadthFirst \ - --preferred-app-group-type Desktop \ - --max-session-limit <value> \ - --location <AzureRegion> - ``` -- 1. To create a personal host pool using the *Automatic* assignment type, run the following command: - - ```azurecli - az desktopvirtualization hostpool create \ - --name <Name> \ - --resource-group <ResourceGroupName> \ - --host-pool-type Personal \ - --load-balancer-type Persistent \ - --preferred-app-group-type Desktop \ - --personal-desktop-assignment-type Automatic \ - --location <AzureRegion> - ``` --3. You can view the properties of your new host pool by running the following command: -- ```azurecli - az desktopvirtualization hostpool show --name <Name> --resource-group <ResourceGroupName> - ``` - # [Azure PowerShell](#tab/powershell) Here's how to create a host pool using the [Az.DesktopVirtualization](/powershell/module/az.desktopvirtualization) PowerShell module. The following examples show you how to create a pooled host pool and a personal host pool. Here's how to create a host pool using the [Az.DesktopVirtualization](/powershel Get-AzWvdHostPool -Name <Name> -ResourceGroupName <ResourceGroupName> | FL * ``` +# [Azure CLI](#tab/cli) ++Here's how to create a host pool using the [desktopvirtualization](/cli/azure/desktopvirtualization) extension for Azure CLI. The following examples show you how to create a pooled host pool and a personal host pool. ++> [!IMPORTANT] +> In the following examples, you'll need to change the `<placeholder>` values for your own. ++2. 
Use the `az desktopvirtualization hostpool create` command with the following examples to create a host pool. More parameters are available; for more information, see the [az desktopvirtualization hostpool Azure CLI reference](/cli/azure/desktopvirtualization/hostpool). ++ 1. To create a pooled host pool using the *breadth-first* [load-balancing algorithm](host-pool-load-balancing.md) and *Desktop* as the preferred [app group type](environment-setup.md#app-groups), run the following command: + + ```azurecli + az desktopvirtualization hostpool create \ + --name <Name> \ + --resource-group <ResourceGroupName> \ + --host-pool-type Pooled \ + --load-balancer-type BreadthFirst \ + --preferred-app-group-type Desktop \ + --max-session-limit <value> \ + --location <AzureRegion> + ``` ++ 1. To create a personal host pool using the *Automatic* assignment type, run the following command: + + ```azurecli + az desktopvirtualization hostpool create \ + --name <Name> \ + --resource-group <ResourceGroupName> \ + --host-pool-type Personal \ + --load-balancer-type Persistent \ + --preferred-app-group-type Desktop \ + --personal-desktop-assignment-type Automatic \ + --location <AzureRegion> + ``` ++3. You can view the properties of your new host pool by running the following command: ++ ```azurecli + az desktopvirtualization hostpool show --name <Name> --resource-group <ResourceGroupName> + ``` + ## Next steps If you didn't complete the optional sections when creating a host pool, you'll s - [Enable diagnostics settings](diagnostics-log-analytics.md). -# [Azure CLI](#tab/cli) + +# [Azure PowerShell](#tab/powershell) Now that you've created a host pool, you'll still need to do the following tasks: Now that you've created a host pool, you'll still need to do the following tasks - [Add session hosts to a host pool](add-session-hosts-host-pool.md). 
- [Enable diagnostics settings](diagnostics-log-analytics.md).- -# [Azure PowerShell](#tab/powershell) ++# [Azure CLI](#tab/cli) Now that you've created a host pool, you'll still need to do the following tasks: |
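The pooled and personal `az desktopvirtualization hostpool create` examples in this change differ only in a handful of flags. As a minimal illustration of that difference, the following Python sketch assembles the documented argument lists; `build_hostpool_args` is a hypothetical helper written for this digest, not part of any Azure SDK or CLI.

```python
# Hypothetical helper that assembles the argument list for
# `az desktopvirtualization hostpool create`, mirroring the two examples
# in the change above: a pooled host pool using the breadth-first
# load-balancing algorithm, and a personal host pool using the
# Automatic assignment type.
def build_hostpool_args(name, resource_group, location, host_pool_type,
                        max_session_limit=None):
    args = [
        "az", "desktopvirtualization", "hostpool", "create",
        "--name", name,
        "--resource-group", resource_group,
        "--host-pool-type", host_pool_type,
        "--preferred-app-group-type", "Desktop",
        "--location", location,
    ]
    if host_pool_type == "Pooled":
        # The pooled example uses breadth-first load balancing and caps
        # concurrent sessions per session host.
        args += ["--load-balancer-type", "BreadthFirst"]
        if max_session_limit is not None:
            args += ["--max-session-limit", str(max_session_limit)]
    else:
        # The personal example uses persistent load balancing with the
        # Automatic desktop assignment type.
        args += ["--load-balancer-type", "Persistent",
                 "--personal-desktop-assignment-type", "Automatic"]
    return args

# Example parameter values are placeholders, as in the documented snippets.
pooled_args = build_hostpool_args("hp01", "rg-avd", "uksouth", "Pooled",
                                  max_session_limit=10)
personal_args = build_hostpool_args("hp02", "rg-avd", "uksouth", "Personal")
```

Only the load-balancer type, the session limit, and the assignment type distinguish the two scenarios; all other flags are shared.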
virtual-desktop | Private Link Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/private-link-overview.md | The following table summarizes the private endpoints you need to create: You can either share these private endpoints across your network topology or you can isolate your virtual networks so that each has its own private endpoint to the host pool or workspace. -The following high-level diagram shows how Private Link securely connects a local client to the Azure Virtual Desktop service: +The following high-level diagram shows how Private Link securely connects a local client to the Azure Virtual Desktop service. For more detailed information about client connections, see [Client connection sequence](#client-connection-sequence). :::image type="content" source="media/private-link-diagram.png" alt-text="A high-level diagram that shows Private Link connecting a local client to the Azure Virtual Desktop service."::: When adding Private Link with Azure Virtual Desktop, you have the following options: - Clients use public routes while session host VMs use private routes. - Both clients and session host VMs use public routes. Private Link isn't used. +For connections to a workspace, except the workspace used for initial feed discovery (global sub-resource), the following table details the outcome of each scenario: ++| Configuration | Outcome | +|--|--| +| Public access **enabled** from all networks | Workspace feed requests are **allowed** from *public* routes.<br /><br />Workspace feed requests are **allowed** from *private* routes. | +| Public access **disabled** from all networks | Workspace feed requests are **denied** from *public* routes.<br /><br />Workspace feed requests are **allowed** from *private* routes.
| ++With the [reverse connect transport](network-connectivity.md#reverse-connect-transport), a connection to a host pool involves two network connections: the client to the gateway, and the session host to the gateway. In addition to enabling or disabling public access for both connections, you can also choose to enable public access for clients connecting to the gateway and only allow private access for session hosts connecting to the gateway. The following table details the outcome of each scenario: ++| Configuration | Outcome | +|--|--| +| Public access **enabled** from all networks | Remote sessions are **allowed** when either the client or session host is using a *public* route.<br /><br />Remote sessions are **allowed** when either the client or session host is using a *private* route. | +| Public access **disabled** from all networks | Remote sessions are **denied** when either the client or session host is using a *public* route.<br /><br />Remote sessions are **allowed** when both the client and session host are using a *private* route. | +| Public access **enabled** for client networks, but **disabled** for session host networks | Remote sessions are **denied** if the session host is using a *public* route, regardless of the route the client is using.<br /><br />Remote sessions are **allowed** as long as the session host is using a *private* route, regardless of the route the client is using. | + > [!IMPORTANT] > - A private endpoint to the global sub-resource of any workspace controls the shared fully qualified domain name (FQDN) for initial feed discovery. This in turn enables feed discovery for all workspaces. Because the workspace connected to the private endpoint is so important, deleting it will cause all feed discovery processes to stop working. We recommend you create an unused placeholder workspace for the global sub-resource. >+> - You can't control access to the workspace used for the initial feed discovery (global sub-resource).
If you configure this workspace to only allow private access, the setting is ignored. This workspace is always accessible from public routes. +> > - If you intend to restrict network ports from either the user client devices or your session host VMs to the private endpoints, you will need to allow traffic across the entire TCP dynamic port range of 1 - 65535 to the private endpoint for the host pool resource using the *connection* sub-resource. The entire TCP dynamic port range is needed because port mapping is used to reach all global gateways through the single private endpoint IP address corresponding to the *connection* sub-resource. If you restrict ports to the private endpoint, your users may not be able to connect successfully to Azure Virtual Desktop. +## Client connection sequence ++When a user connects to Azure Virtual Desktop over Private Link, and Azure Virtual Desktop is configured to only allow client connections from private routes, the connection sequence is as follows: ++1. With a supported client, a user subscribes to a workspace. The user's device queries DNS for the address `rdweb.wvd.microsoft.com` (or the corresponding address for other Azure environments). ++1. Your private DNS zone for **privatelink-global.wvd.microsoft.com** returns the private IP address for the initial feed discovery (global sub-resource). ++1. For each workspace in the feed, a DNS query is made for the address `<workspaceId>.privatelink.wvd.microsoft.com`. ++1. Your private DNS zone for **privatelink.wvd.microsoft.com** returns the private IP address for the workspace feed download. ++1. When connecting to a remote session, the `.rdp` file that comes from the workspace feed download contains the Remote Desktop gateway address. A DNS query is made for the address `<hostpoolId>.afdfp-rdgateway.wvd.microsoft.com`. ++1.
Your private DNS zone for **privatelink.wvd.microsoft.com** returns the private IP address for the Remote Desktop gateway to use for the host pool providing the remote session. + ## Limitations Private Link with Azure Virtual Desktop has the following limitations: |
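The two outcome tables added in this change (workspace feed requests, and remote sessions over the reverse connect transport) reduce to simple predicates. The following Python sketch models them for illustration only; the function and configuration names are invented here and aren't Azure API names or setting values.

```python
# Sketch of the access outcomes tabulated in the change above. Routes are
# "public" or "private"; the configuration labels below are illustrative.

def feed_request_allowed(public_access_enabled: bool, route: str) -> bool:
    # Workspace feed requests over private routes are always allowed;
    # public routes only work while public access is enabled.
    return route == "private" or public_access_enabled

def remote_session_allowed(config: str, client_route: str,
                           host_route: str) -> bool:
    if config == "enabled-all":
        # Public access enabled from all networks: sessions are allowed
        # on either kind of route.
        return True
    if config == "disabled-all":
        # Public access disabled: both legs of the reverse connect
        # transport (client-to-gateway and session-host-to-gateway)
        # must use private routes.
        return client_route == "private" and host_route == "private"
    if config == "clients-public-only":
        # Public access enabled for client networks but disabled for
        # session host networks: only the session host leg must be
        # private; the client may use either route.
        return host_route == "private"
    raise ValueError(f"unknown configuration: {config!r}")
```

Note the asymmetry in the third scenario: the session host's route is decisive, while the client's route never affects the outcome.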
virtual-desktop | Private Link Setup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/private-link-setup.md | In order to use Private Link with Azure Virtual Desktop, you need the following ## Enable the feature -To use Private Link with Azure Virtual Desktop, first you need to re-register the *Microsoft.DesktopVirtualization* resource provider and register the *Azure Virtual Desktop Private Link* feature on your Azure subscription. +To use Private Link with Azure Virtual Desktop, you need to re-register the *Microsoft.DesktopVirtualization* resource provider on each subscription where you want to use Private Link with Azure Virtual Desktop. > [!IMPORTANT]-> You need to re-register the resource provider and register the feature for each subscription you want to use Private Link with Azure Virtual Desktop. +> For Azure US Gov and Azure operated by 21Vianet, you also need to register the feature for each subscription. -### Re-register the resource provider +### Register the feature (Azure US Gov and Azure operated by 21Vianet only) -To re-register the *Microsoft.DesktopVirtualization* resource provider: +To register the *Azure Virtual Desktop Private Link* feature: 1. Sign in to the [Azure portal](https://portal.azure.com). 1. In the search bar, enter **Subscriptions** and select the matching service entry. -1. Select the name of your subscription, then in the section **Settings**, select **Resource providers**. +1. Select the name of your subscription, then in the **Settings** section, select **Preview features**. -1. Search for and select **Microsoft.DesktopVirtualization**, then select **Re-register**. +1. Select the drop-down list for the filter **Type** and set it to **Microsoft.DesktopVirtualization**. -1. Verify that the status of *Microsoft.DesktopVirtualization* is **Registered**. +1. Select **Azure Virtual Desktop Private Link**, then select **Register**.
-### Register the feature +### Re-register the resource provider -To register the *Azure Virtual Desktop Private Link* feature: +To re-register the *Microsoft.DesktopVirtualization* resource provider: 1. Sign in to the [Azure portal](https://portal.azure.com). 1. In the search bar, enter **Subscriptions** and select the matching service entry. -1. Select the name of your subscription, then in the **Settings** section, select **Preview features**. +1. Select the name of your subscription, then in the section **Settings**, select **Resource providers**. -1. Select the drop-down list for the filter **Type** and set it to **Microsoft.DesktopVirtualization**. +1. Search for and select **Microsoft.DesktopVirtualization**, then select **Re-register**. -1. Select **Azure Virtual Desktop Private Link**, then select **Register**. +1. Verify that the status of *Microsoft.DesktopVirtualization* is **Registered**. ## Create private endpoints Here's how to create a private endpoint for the *connection* sub-resource for co 1. Select **Create** to create the private endpoint for the connection sub-resource. -# [Azure CLI](#tab/cli) --Here's how to create a private endpoint for the *connection* sub-resource used for connections to a host pool using the [network](/cli/azure/network) and [desktopvirtualization](/cli/azure/desktopvirtualization) extensions for Azure CLI. --> [!IMPORTANT] -> In the following examples, you'll need to change the `<placeholder>` values for your own. ---2. Create a Private Link service connection and the private endpoint for a host pool with the connection sub-resource by running the commands in one of the following examples. -- 1. To create a private endpoint with a dynamically allocated IP address: - - ```azurecli - # Specify the Azure region. This must be the same region as your virtual network and session hosts. 
- location=<Location> - - # Get the resource ID of the host pool - hostPoolId=$(az desktopvirtualization hostpool show \ - --name <HostPoolName> \ - --resource-group <ResourceGroupName> \ - --query [id] \ - --output tsv) - - # Create a service connection and the private endpoint - az network private-endpoint create \ - --name <PrivateEndpointName> \ - --resource-group <ResourceGroupName> \ - --location $location \ - --vnet-name <VNetName> \ - --subnet <SubnetName> \ - --connection-name <ConnectionName> \ - --private-connection-resource-id $hostPoolId \ - --group-id connection \ - --output table - ``` -- 1. To create a private endpoint with statically allocated IP addresses: - - ```azurecli - # Specify the Azure region. This must be the same region as your virtual network and session hosts. - location=<Location> - - # Get the resource ID of the host pool - hostPoolId=$(az desktopvirtualization hostpool show \ - --name <HostPoolName> \ - --resource-group <ResourceGroupName> \ - --query [id] \ - --output tsv) - - # Store each private endpoint IP configuration in a variable - ip1={name:ipconfig1,group-id:connection,member-name:broker,private-ip-address:<IPAddress>} - ip2={name:ipconfig2,group-id:connection,member-name:diagnostics,private-ip-address:<IPAddress>} - ip3={name:ipconfig3,group-id:connection,member-name:gateway-ring-map,private-ip-address:<IPAddress>} - ip4={name:ipconfig4,group-id:connection,member-name:web,private-ip-address:<IPAddress>} - - # Create a service connection and the private endpoint - az network private-endpoint create \ - --name <PrivateEndpointName> \ - --resource-group <ResourceGroupName> \ - --location $location \ - --vnet-name <VNetName> \ - --subnet <SubnetName> \ - --connection-name <ConnectionName> \ - --private-connection-resource-id $hostPoolId \ - --group-id connection \ - --ip-configs [$ip1,$ip2,$ip3,$ip4] \ - --output table - ``` -- Your output should be similar to the following. 
Check that the value for **ProvisioningState** is **Succeeded**. -- ```output - CustomNetworkInterfaceName Location Name ProvisioningState ResourceGroup - - - -- - - uksouth endpoint-hp01 Succeeded privatelink - ``` --3. You need to [configure DNS for your private endpoint](../private-link/private-endpoint-dns.md) to resolve the DNS name of the private endpoint in the virtual network. The private DNS zone name is `privatelink.wvd.microsoft.com`. For the steps to create and configure the private DNS zone with Azure CLI, see [Configure the private DNS zone](../private-link/create-private-endpoint-cli.md#configure-the-private-dns-zone). # [Azure PowerShell](#tab/powershell) Here's how to create a private endpoint for the *connection* sub-resource used f 5. You need to [configure DNS for your private endpoint](../private-link/private-endpoint-dns.md) to resolve the DNS name of the private endpoint in the virtual network. The private DNS zone name is `privatelink.wvd.microsoft.com`. For the steps to create and configure the private DNS zone with Azure PowerShell, see [Configure the private DNS zone](../private-link/create-private-endpoint-powershell.md#configure-the-private-dns-zone). ---> [!IMPORTANT] -> You need to create a private endpoint for the connection sub-resource for each host pool you want to use with Private Link. ----### Feed download --To create a private endpoint for the *feed* sub-resource for a workspace, select the relevant tab for your scenario and follow the steps. --# [Portal](#tab/portal) --1. From the Azure Virtual Desktop overview, select **Workspaces**, then select the name of the workspace for which you want to create a *feed* sub-resource. --1. From the workspace overview, select **Networking**, then **Private endpoint connections**, and finally **New private endpoint**. --1. 
On the **Basics** tab, complete the following information: -- | Parameter | Value/Description | - |--|--| - | Subscription | Select the subscription you want to create the private endpoint in from the drop-down list. | - | Resource group | This automatically defaults to the same resource group as your workspace for the private endpoint, but you can also select an alternative existing one from the drop-down list, or create a new one. | - | Name | Enter a name for the new private endpoint. | - | Network interface name | The network interface name fills in automatically based on the name you gave the private endpoint, but you can also specify a different name. | - | Region | This automatically defaults to the same Azure region as the workspace and is where the private endpoint is deployed. This must be the same region as your virtual network. | -- Once you've completed this tab, select **Next: Resource**. --1. On the **Resource** tab, validate the values for *Subscription*, *Resource type*, and *Resource*, then for **Target sub-resource**, select **feed**. Once you've completed this tab, select **Next: Virtual Network**. --1. On the **Virtual Network** tab, complete the following information: -- | Parameter | Value/Description | - |--|--| - | Virtual network | Select the virtual network you want to create the private endpoint in from the drop-down list. | - | Subnet | Select the subnet of the virtual network you want to create the private endpoint in from the drop-down list. | - | Network policy for private endpoints | Select **edit** if you want to choose a subnet network policy. For more information, see [Manage network policies for private endpoints](../private-link/disable-private-endpoint-network-policy.md). | - | Private IP configuration | Select **Dynamically allocate IP address** or **Statically allocate IP address**. 
The address space is from the subnet you selected.<br /><br />If you choose to statically allocate IP addresses, you need to fill in the **Name** and **Private IP** for each listed member. | - | Application security group | *Optional*: select an existing application security group for the private endpoint from the drop-down list, or create a new one. You can also add one later. | -- Once you've completed this tab, select **Next: DNS**. --1. On the **DNS** tab, choose whether you want to use [Azure Private DNS Zone](../dns/private-dns-privatednszone.md) by selecting **Yes** or **No** for **Integrate with private DNS zone**. If you select **Yes**, select the subscription and resource group in which to create the private DNS zone `privatelink.wvd.microsoft.com`. For more information, see [Azure Private Endpoint DNS configuration](../private-link/private-endpoint-dns.md). -- Once you've completed this tab, select **Next: Tags**. --1. *Optional*: On the **Tags** tab, you can enter any [name/value pairs](../azure-resource-manager/management/tag-resources.md) you need, then select **Next: Review + create**. +# [Azure CLI](#tab/cli) -1. On the **Review + create** tab, ensure validation passes and review the information that is used during deployment. +Here's how to create a private endpoint for the *connection* sub-resource used for connections to a host pool using the [network](/cli/azure/network) and [desktopvirtualization](/cli/azure/desktopvirtualization) extensions for Azure CLI. -1. Select **Create** to create the private endpoint for the feed sub-resource. +> [!IMPORTANT] +> In the following examples, you'll need to change the `<placeholder>` values for your own. -# [Azure CLI](#tab/cli) -1. In the same CLI session, create a Private Link service connection and the private endpoint for a workspace with the feed sub-resource by running the following commands. +2. 
Create a Private Link service connection and the private endpoint for a host pool with the connection sub-resource by running the commands in one of the following examples. 1. To create a private endpoint with a dynamically allocated IP address: ```azurecli- # Specify the Azure region. This must be the same region as your virtual network. + # Specify the Azure region. This must be the same region as your virtual network and session hosts. location=<Location> - # Get the resource ID of the workspace - workspaceId=$(az desktopvirtualization workspace show \ - --name <WorkspaceName> \ + # Get the resource ID of the host pool + hostPoolId=$(az desktopvirtualization hostpool show \ + --name <HostPoolName> \ --resource-group <ResourceGroupName> \ --query [id] \ --output tsv) To create a private endpoint for the *feed* sub-resource for a workspace, select --vnet-name <VNetName> \ --subnet <SubnetName> \ --connection-name <ConnectionName> \- --private-connection-resource-id $workspaceId \ - --group-id feed \ + --private-connection-resource-id $hostPoolId \ + --group-id connection \ --output table ``` 1. To create a private endpoint with statically allocated IP addresses: ```azurecli- # Specify the Azure region. This must be the same region as your virtual network. + # Specify the Azure region. This must be the same region as your virtual network and session hosts. 
location=<Location> - # Get the resource ID of the workspace - workspaceId=$(az desktopvirtualization workspace show \ - --name <WorkspaceName> \ + # Get the resource ID of the host pool + hostPoolId=$(az desktopvirtualization hostpool show \ + --name <HostPoolName> \ --resource-group <ResourceGroupName> \ --query [id] \ --output tsv) # Store each private endpoint IP configuration in a variable- ip1={name:ipconfig1,group-id:feed,member-name:web-r1,private-ip-address:<IPAddress>} - ip2={name:ipconfig2,group-id:feed,member-name:web-r0,private-ip-address:<IPAddress>} + ip1={name:ipconfig1,group-id:connection,member-name:broker,private-ip-address:<IPAddress>} + ip2={name:ipconfig2,group-id:connection,member-name:diagnostics,private-ip-address:<IPAddress>} + ip3={name:ipconfig3,group-id:connection,member-name:gateway-ring-map,private-ip-address:<IPAddress>} + ip4={name:ipconfig4,group-id:connection,member-name:web,private-ip-address:<IPAddress>} # Create a service connection and the private endpoint az network private-endpoint create \ To create a private endpoint for the *feed* sub-resource for a workspace, select --vnet-name <VNetName> \ --subnet <SubnetName> \ --connection-name <ConnectionName> \- --private-connection-resource-id $workspaceId \ - --group-id feed \ - --ip-configs [$ip1,$ip2] \ + --private-connection-resource-id $hostPoolId \ + --group-id connection \ + --ip-configs [$ip1,$ip2,$ip3,$ip4] \ --output table ``` To create a private endpoint for the *feed* sub-resource for a workspace, select ```output CustomNetworkInterfaceName Location Name ProvisioningState ResourceGroup - - -- - - uksouth endpoint-ws01 Succeeded privatelink + uksouth endpoint-hp01 Succeeded privatelink ``` -1. You need to [configure DNS for your private endpoint](../private-link/private-endpoint-dns.md) to resolve the DNS name of the private endpoint in the virtual network. The private DNS zone name is `privatelink.wvd.microsoft.com`. 
For the steps to create and configure the private DNS zone with Azure CLI, see [Configure the private DNS zone](../private-link/create-private-endpoint-cli.md#configure-the-private-dns-zone). +3. You need to [configure DNS for your private endpoint](../private-link/private-endpoint-dns.md) to resolve the DNS name of the private endpoint in the virtual network. The private DNS zone name is `privatelink.wvd.microsoft.com`. For the steps to create and configure the private DNS zone with Azure CLI, see [Configure the private DNS zone](../private-link/create-private-endpoint-cli.md#configure-the-private-dns-zone). ++++> [!IMPORTANT] +> You need to create a private endpoint for the connection sub-resource for each host pool you want to use with Private Link. ++++### Feed download ++To create a private endpoint for the *feed* sub-resource for a workspace, select the relevant tab for your scenario and follow the steps. ++# [Portal](#tab/portal) ++1. From the Azure Virtual Desktop overview, select **Workspaces**, then select the name of the workspace for which you want to create a *feed* sub-resource. ++1. From the workspace overview, select **Networking**, then **Private endpoint connections**, and finally **New private endpoint**. ++1. On the **Basics** tab, complete the following information: ++ | Parameter | Value/Description | + |--|--| + | Subscription | Select the subscription you want to create the private endpoint in from the drop-down list. | + | Resource group | This automatically defaults to the same resource group as your workspace for the private endpoint, but you can also select an alternative existing one from the drop-down list, or create a new one. | + | Name | Enter a name for the new private endpoint. | + | Network interface name | The network interface name fills in automatically based on the name you gave the private endpoint, but you can also specify a different name. 
| + | Region | This automatically defaults to the same Azure region as the workspace and is where the private endpoint is deployed. This must be the same region as your virtual network. | ++ Once you've completed this tab, select **Next: Resource**. ++1. On the **Resource** tab, validate the values for *Subscription*, *Resource type*, and *Resource*, then for **Target sub-resource**, select **feed**. Once you've completed this tab, select **Next: Virtual Network**. ++1. On the **Virtual Network** tab, complete the following information: ++ | Parameter | Value/Description | + |--|--| + | Virtual network | Select the virtual network you want to create the private endpoint in from the drop-down list. | + | Subnet | Select the subnet of the virtual network you want to create the private endpoint in from the drop-down list. | + | Network policy for private endpoints | Select **edit** if you want to choose a subnet network policy. For more information, see [Manage network policies for private endpoints](../private-link/disable-private-endpoint-network-policy.md). | + | Private IP configuration | Select **Dynamically allocate IP address** or **Statically allocate IP address**. The address space is from the subnet you selected.<br /><br />If you choose to statically allocate IP addresses, you need to fill in the **Name** and **Private IP** for each listed member. | + | Application security group | *Optional*: select an existing application security group for the private endpoint from the drop-down list, or create a new one. You can also add one later. | ++ Once you've completed this tab, select **Next: DNS**. ++1. On the **DNS** tab, choose whether you want to use [Azure Private DNS Zone](../dns/private-dns-privatednszone.md) by selecting **Yes** or **No** for **Integrate with private DNS zone**. If you select **Yes**, select the subscription and resource group in which to create the private DNS zone `privatelink.wvd.microsoft.com`. 
For more information, see [Azure Private Endpoint DNS configuration](../private-link/private-endpoint-dns.md). ++ Once you've completed this tab, select **Next: Tags**. ++1. *Optional*: On the **Tags** tab, you can enter any [name/value pairs](../azure-resource-manager/management/tag-resources.md) you need, then select **Next: Review + create**. ++1. On the **Review + create** tab, ensure validation passes and review the information that is used during deployment. ++1. Select **Create** to create the private endpoint for the feed sub-resource. + # [Azure PowerShell](#tab/powershell) To create a private endpoint for the *feed* sub-resource for a workspace, select 1. You need to [configure DNS for your private endpoint](../private-link/private-endpoint-dns.md) to resolve the DNS name of the private endpoint in the virtual network. The private DNS zone name is `privatelink.wvd.microsoft.com`. For the steps to create and configure the private DNS zone with Azure PowerShell, see [Configure the private DNS zone](../private-link/create-private-endpoint-powershell.md#configure-the-private-dns-zone). ---> [!IMPORTANT] -> You need to a create private endpoint for the feed sub-resource for each workspace you want to use with Private Link. --### Initial feed discovery --To create a private endpoint for the *global* sub-resource used for the initial feed discovery, select the relevant tab for your scenario and follow the steps. --> [!IMPORTANT] -> - Only create one private endpoint for the *global* sub-resource for all your Azure Virtual Desktop deployments. -> -> - A private endpoint to the global sub-resource of any workspace controls the shared fully qualified domain name (FQDN) for initial feed discovery. This in turn enables feed discovery for all workspaces. Because the workspace connected to the private endpoint is so important, deleting it will cause all feed discovery processes to stop working. 
We recommend you create an unused placeholder workspace for the global sub-resource. --# [Portal](#tab/portal) --1. From the Azure Virtual Desktop overview, select **Workspaces**, then select the name of a workspace you want to use for the global sub-resource. -- 1. *Optional*: Instead, create a placeholder workspace to terminate the global endpoint by following the instructions to [Create a workspace](create-application-group-workspace.md?tabs=portal#create-a-workspace). --1. From the workspace overview, select **Networking**, then **Private endpoint connections**, and finally **New private endpoint**. --1. On the **Basics** tab, complete the following information: -- | Parameter | Value/Description | - |--|--| - | Subscription | Select the subscription you want to create the private endpoint in from the drop-down list. | - | Resource group | This automatically defaults to the same resource group as your workspace for the private endpoint, but you can also select an alternative existing one from the drop-down list, or create a new one. | - | Name | Enter a name for the new private endpoint. | - | Network interface name | The network interface name fills in automatically based on the name you gave the private endpoint, but you can also specify a different name. | - | Region | This automatically defaults to the same Azure region as the workspace and is where the private endpoint will be deployed. This must be the same region as your virtual network. | -- Once you've completed this tab, select **Next: Resource**. --1. On the **Resource** tab, validate the values for *Subscription*, *Resource type*, and *Resource*, then for **Target sub-resource**, select **global**. Once you've completed this tab, select **Next: Virtual Network**. --1. On the **Virtual Network** tab, complete the following information: -- | Parameter | Value/Description | - |--|--| - | Virtual network | Select the virtual network you want to create the private endpoint in from the drop-down list. 
| - | Subnet | Select the subnet of the virtual network you want to create the private endpoint in from the drop-down list. | - | Network policy for private endpoints | Select **edit** if you want to choose a subnet network policy. For more information, see [Manage network policies for private endpoints](../private-link/disable-private-endpoint-network-policy.md). | - | Private IP configuration | Select **Dynamically allocate IP address** or **Statically allocate IP address**. The address space is from the subnet you selected.<br /><br />If you choose to statically allocate IP addresses, you need to fill in the **Name** and **Private IP** for each listed member. | - | Application security group | *Optional*: select an existing application security group for the private endpoint from the drop-down list, or create a new one. You can also add one later. | -- Once you've completed this tab, select **Next: DNS**. --1. On the **DNS** tab, choose whether you want to use [Azure Private DNS Zone](../dns/private-dns-privatednszone.md) by selecting **Yes** or **No** for **Integrate with private DNS zone**. If you select **Yes**, select the subscription and resource group in which to create the private DNS zone `privatelink-global.wvd.microsoft.com`. For more information, see [Azure Private Endpoint DNS configuration](../private-link/private-endpoint-dns.md). -- Once you've completed this tab, select **Next: Tags**. --1. *Optional*: On the **Tags** tab, you can enter any [name/value pairs](../azure-resource-manager/management/tag-resources.md) you need, then select **Next: Review + create**. --1. On the **Review + create** tab, ensure validation passes and review the information that is used during deployment. --1. Select **Create** to create the private endpoint for the global sub-resource. - # [Azure CLI](#tab/cli) -1. 
*Optional*: Create a placeholder workspace to terminate the global endpoint by following the instructions to [Create a workspace](create-application-group-workspace.md?tabs=cli#create-a-workspace). --1. In the same CLI session, create a Private Link service connection and the private endpoint for the workspace with the global sub-resource by running the following commands: +1. In the same CLI session, create a Private Link service connection and the private endpoint for a workspace with the feed sub-resource by running the following commands. 1. To create a private endpoint with a dynamically allocated IP address: To create a private endpoint for the *global* sub-resource used for the initial --subnet <SubnetName> \ --connection-name <ConnectionName> \ --private-connection-resource-id $workspaceId \- --group-id global \ + --group-id feed \ --output table ``` To create a private endpoint for the *global* sub-resource used for the initial --output tsv) # Store each private endpoint IP configuration in a variable- ip={name:ipconfig,group-id:global,member-name:web,private-ip-address:<IPAddress>} + ip1={name:ipconfig1,group-id:feed,member-name:web-r1,private-ip-address:<IPAddress>} + ip2={name:ipconfig2,group-id:feed,member-name:web-r0,private-ip-address:<IPAddress>} # Create a service connection and the private endpoint az network private-endpoint create \ To create a private endpoint for the *global* sub-resource used for the initial --subnet <SubnetName> \ --connection-name <ConnectionName> \ --private-connection-resource-id $workspaceId \- --group-id global \ - --ip-config $ip \ + --group-id feed \ + --ip-configs [$ip1,$ip2] \ --output table ``` To create a private endpoint for the *global* sub-resource used for the initial ```output CustomNetworkInterfaceName Location Name ProvisioningState ResourceGroup - - -- - - uksouth endpoint-global Succeeded privatelink + uksouth endpoint-ws01 Succeeded privatelink ``` -1. 
You need to [configure DNS for your private endpoint](../private-link/private-endpoint-dns.md) to resolve the DNS name of the private endpoint in the virtual network. The private DNS zone name is `privatelink-global.wvd.microsoft.com`. For the steps to create and configure the private DNS zone with Azure CLI, see [Configure the private DNS zone](../private-link/create-private-endpoint-cli.md#configure-the-private-dns-zone). +1. You need to [configure DNS for your private endpoint](../private-link/private-endpoint-dns.md) to resolve the DNS name of the private endpoint in the virtual network. The private DNS zone name is `privatelink.wvd.microsoft.com`. For the steps to create and configure the private DNS zone with Azure CLI, see [Configure the private DNS zone](../private-link/create-private-endpoint-cli.md#configure-the-private-dns-zone). ++++> [!IMPORTANT] +> You need to create a private endpoint for the feed sub-resource for each workspace you want to use with Private Link. ++### Initial feed discovery ++To create a private endpoint for the *global* sub-resource used for the initial feed discovery, select the relevant tab for your scenario and follow the steps. ++> [!IMPORTANT] +> - Only create one private endpoint for the *global* sub-resource for all your Azure Virtual Desktop deployments. +> +> - A private endpoint to the global sub-resource of any workspace controls the shared fully qualified domain name (FQDN) for initial feed discovery. This in turn enables feed discovery for all workspaces. Because the workspace connected to the private endpoint is so important, deleting it will cause all feed discovery processes to stop working. We recommend you create an unused placeholder workspace for the global sub-resource. ++# [Portal](#tab/portal) ++1. From the Azure Virtual Desktop overview, select **Workspaces**, then select the name of a workspace you want to use for the global sub-resource. ++ 1.
*Optional*: Instead, create a placeholder workspace to terminate the global endpoint by following the instructions to [Create a workspace](create-application-group-workspace.md?tabs=portal#create-a-workspace). ++1. From the workspace overview, select **Networking**, then **Private endpoint connections**, and finally **New private endpoint**. ++1. On the **Basics** tab, complete the following information: ++ | Parameter | Value/Description | + |--|--| + | Subscription | Select the subscription you want to create the private endpoint in from the drop-down list. | + | Resource group | This automatically defaults to the same resource group as your workspace for the private endpoint, but you can also select an alternative existing one from the drop-down list, or create a new one. | + | Name | Enter a name for the new private endpoint. | + | Network interface name | The network interface name fills in automatically based on the name you gave the private endpoint, but you can also specify a different name. | + | Region | This automatically defaults to the same Azure region as the workspace and is where the private endpoint will be deployed. This must be the same region as your virtual network. | ++ Once you've completed this tab, select **Next: Resource**. ++1. On the **Resource** tab, validate the values for *Subscription*, *Resource type*, and *Resource*, then for **Target sub-resource**, select **global**. Once you've completed this tab, select **Next: Virtual Network**. ++1. On the **Virtual Network** tab, complete the following information: ++ | Parameter | Value/Description | + |--|--| + | Virtual network | Select the virtual network you want to create the private endpoint in from the drop-down list. | + | Subnet | Select the subnet of the virtual network you want to create the private endpoint in from the drop-down list. | + | Network policy for private endpoints | Select **edit** if you want to choose a subnet network policy. 
For more information, see [Manage network policies for private endpoints](../private-link/disable-private-endpoint-network-policy.md). | + | Private IP configuration | Select **Dynamically allocate IP address** or **Statically allocate IP address**. The address space is from the subnet you selected.<br /><br />If you choose to statically allocate IP addresses, you need to fill in the **Name** and **Private IP** for each listed member. | + | Application security group | *Optional*: select an existing application security group for the private endpoint from the drop-down list, or create a new one. You can also add one later. | ++ Once you've completed this tab, select **Next: DNS**. ++1. On the **DNS** tab, choose whether you want to use [Azure Private DNS Zone](../dns/private-dns-privatednszone.md) by selecting **Yes** or **No** for **Integrate with private DNS zone**. If you select **Yes**, select the subscription and resource group in which to create the private DNS zone `privatelink-global.wvd.microsoft.com`. For more information, see [Azure Private Endpoint DNS configuration](../private-link/private-endpoint-dns.md). ++ Once you've completed this tab, select **Next: Tags**. ++1. *Optional*: On the **Tags** tab, you can enter any [name/value pairs](../azure-resource-manager/management/tag-resources.md) you need, then select **Next: Review + create**. ++1. On the **Review + create** tab, ensure validation passes and review the information that is used during deployment. ++1. Select **Create** to create the private endpoint for the global sub-resource. + # [Azure PowerShell](#tab/powershell) To create a private endpoint for the *global* sub-resource used for the initial 1. You need to [configure DNS for your private endpoint](../private-link/private-endpoint-dns.md) to resolve the DNS name of the private endpoint in the virtual network. The private DNS zone name is `privatelink-global.wvd.microsoft.com`. 
For the steps to create and configure the private DNS zone with Azure PowerShell, see [Configure the private DNS zone](../private-link/create-private-endpoint-powershell.md#configure-the-private-dns-zone). +# [Azure CLI](#tab/cli) ++1. *Optional*: Create a placeholder workspace to terminate the global endpoint by following the instructions to [Create a workspace](create-application-group-workspace.md?tabs=cli#create-a-workspace). ++1. In the same CLI session, create a Private Link service connection and the private endpoint for the workspace with the global sub-resource by running the following commands: ++ 1. To create a private endpoint with a dynamically allocated IP address: + + ```azurecli + # Specify the Azure region. This must be the same region as your virtual network. + location=<Location> + + # Get the resource ID of the workspace + workspaceId=$(az desktopvirtualization workspace show \ + --name <WorkspaceName> \ + --resource-group <ResourceGroupName> \ + --query [id] \ + --output tsv) + + # Create a service connection and the private endpoint + az network private-endpoint create \ + --name <PrivateEndpointName> \ + --resource-group <ResourceGroupName> \ + --location $location \ + --vnet-name <VNetName> \ + --subnet <SubnetName> \ + --connection-name <ConnectionName> \ + --private-connection-resource-id $workspaceId \ + --group-id global \ + --output table + ``` ++ 1. To create a private endpoint with statically allocated IP addresses: + + ```azurecli + # Specify the Azure region. This must be the same region as your virtual network. 
+ location=<Location> + + # Get the resource ID of the workspace + workspaceId=$(az desktopvirtualization workspace show \ + --name <WorkspaceName> \ + --resource-group <ResourceGroupName> \ + --query [id] \ + --output tsv) + + # Store each private endpoint IP configuration in a variable + ip={name:ipconfig,group-id:global,member-name:web,private-ip-address:<IPAddress>} + + # Create a service connection and the private endpoint + az network private-endpoint create \ + --name <PrivateEndpointName> \ + --resource-group <ResourceGroupName> \ + --location $location \ + --vnet-name <VNetName> \ + --subnet <SubnetName> \ + --connection-name <ConnectionName> \ + --private-connection-resource-id $workspaceId \ + --group-id global \ + --ip-configs [$ip] \ + --output table + ``` ++ Your output should be similar to the following. Check that the value for **ProvisioningState** is **Succeeded**. ++ ```output + CustomNetworkInterfaceName Location Name ProvisioningState ResourceGroup + - - -- - + uksouth endpoint-global Succeeded privatelink + ``` ++1. You need to [configure DNS for your private endpoint](../private-link/private-endpoint-dns.md) to resolve the DNS name of the private endpoint in the virtual network. The private DNS zone name is `privatelink-global.wvd.microsoft.com`. For the steps to create and configure the private DNS zone with Azure CLI, see [Configure the private DNS zone](../private-link/create-private-endpoint-cli.md#configure-the-private-dns-zone). + ## Closing public routes To check the connection state of each private endpoint, select the relevant tab 1. For the private endpoint listed, check the **Connection state** is **Approved**. -# [Azure CLI](#tab/cli) --1.
In the same CLI session, run the following commands to check the connection state of a workspace or a host pool: -- ```azurecli - az network private-endpoint show \ - --name <PrivateEndpointName> \ - --resource-group <ResourceGroupName> \ - --query "{name:name, privateLinkServiceConnectionStates:privateLinkServiceConnections[].privateLinkServiceConnectionState}" - ``` -- Your output should be similar to the following. Check that the value for **status** is **Approved**. -- ```output - { - "name": "endpoint-ws01", - "privateLinkServiceConnectionStates": [ - { - "actionsRequired": "None", - "description": "Auto-approved", - "status": "Approved" - } - ] - } - ``` # [Azure PowerShell](#tab/powershell) To check the connection state of each private endpoint, select the relevant tab PrivateLinkServiceConnectionStateDescription : Auto-approved PrivateLinkServiceConnectionStateActionsRequired : None +# [Azure CLI](#tab/cli) ++1. In the same CLI session, run the following commands to check the connection state of a workspace or a host pool: ++ ```azurecli + az network private-endpoint show \ + --name <PrivateEndpointName> \ + --resource-group <ResourceGroupName> \ + --query "{name:name, privateLinkServiceConnectionStates:privateLinkServiceConnections[].privateLinkServiceConnectionState}" + ``` ++ Your output should be similar to the following. Check that the value for **status** is **Approved**. ++ ```output + { + "name": "endpoint-ws01", + "privateLinkServiceConnectionStates": [ + { + "actionsRequired": "None", + "description": "Auto-approved", + "status": "Approved" + } + ] + } + ``` + ### Check the status of your session hosts |
virtual-desktop | Service Principal Assign Roles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/service-principal-assign-roles.md | Here's how to assign a role to the Azure Virtual Desktop service principal using 1. Select **Next**, then select **Review + assign** to complete the role assignment. -# [Azure CLI](#tab/cli) --Here's how to assign a role to the Azure Virtual Desktop service principal using the [desktopvirtualization](/cli/azure/desktopvirtualization) extension for Azure CLI. ---2. Find the ID of the subscription you want to add the role assignment to by listing all that are available to you with the following command: -- ```azurecli - az account list --output table - ``` --3. Store the value for **SubscriptionId** in a variable by running the following command, replacing the subscription ID in this example with your own: -- ```azurecli - subId=00000000-0000-0000-0000-000000000000 - ``` --4. Assign the role to the Azure Virtual Desktop service principal by running the following command, replacing the value for the `role` parameter with the name of the role you need to assign. 
This example assigns the *Desktop Virtualization Power On Off Contributor* role to the subscription: -- ```azurecli - az role assignment create \ - --assignee "9cdead84-a844-4324-93f2-b2e6bb768d07" \ - --role "Desktop Virtualization Power On Off Contributor" \ - --scope "/subscriptions/$subId" - ``` -- Your output should be similar to the following: -- ```output - { - "condition": null, - "conditionVersion": null, - "createdBy": null, - "createdOn": "2023-06-22T13:50:22.978226+00:00", - "delegatedManagedIdentityResourceId": null, - "description": null, - "id": "/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.Authorization/roleAssignments/a211100e-aa52-4f8d-aac9-ad0833f969d0", - "name": "a211100e-aa52-4f8d-aac9-ad0833f969d0", - "principalId": "00000000-0000-0000-0000-000000000000", - "principalType": "ServicePrincipal", - "roleDefinitionId": "/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.Authorization/roleDefinitions/40c5ff49-9181-41f8-ae61-143b0e78555e", - "scope": "/subscriptions/00000000-0000-0000-0000-000000000000", - "type": "Microsoft.Authorization/roleAssignments", - "updatedBy": "effe20b0-5afb-4e68-a5d7-f8ef9873a070", - "updatedOn": "2023-06-22T13:50:23.335229+00:00" - } - ``` # [Azure PowerShell](#tab/powershell) Here's how to assign a role to the Azure Virtual Desktop service principal using Condition : ``` +# [Azure CLI](#tab/cli) ++Here's how to assign a role to the Azure Virtual Desktop service principal using the [desktopvirtualization](/cli/azure/desktopvirtualization) extension for Azure CLI. +++2. Find the ID of the subscription you want to add the role assignment to by listing all that are available to you with the following command: ++ ```azurecli + az account list --output table + ``` ++3. 
Store the value for **SubscriptionId** in a variable by running the following command, replacing the subscription ID in this example with your own: ++ ```azurecli + subId=00000000-0000-0000-0000-000000000000 + ``` ++4. Assign the role to the Azure Virtual Desktop service principal by running the following command, replacing the value for the `role` parameter with the name of the role you need to assign. This example assigns the *Desktop Virtualization Power On Off Contributor* role to the subscription: ++ ```azurecli + az role assignment create \ + --assignee "9cdead84-a844-4324-93f2-b2e6bb768d07" \ + --role "Desktop Virtualization Power On Off Contributor" \ + --scope "/subscriptions/$subId" + ``` ++ Your output should be similar to the following: ++ ```output + { + "condition": null, + "conditionVersion": null, + "createdBy": null, + "createdOn": "2023-06-22T13:50:22.978226+00:00", + "delegatedManagedIdentityResourceId": null, + "description": null, + "id": "/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.Authorization/roleAssignments/a211100e-aa52-4f8d-aac9-ad0833f969d0", + "name": "a211100e-aa52-4f8d-aac9-ad0833f969d0", + "principalId": "00000000-0000-0000-0000-000000000000", + "principalType": "ServicePrincipal", + "roleDefinitionId": "/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.Authorization/roleDefinitions/40c5ff49-9181-41f8-ae61-143b0e78555e", + "scope": "/subscriptions/00000000-0000-0000-0000-000000000000", + "type": "Microsoft.Authorization/roleAssignments", + "updatedBy": "effe20b0-5afb-4e68-a5d7-f8ef9873a070", + "updatedOn": "2023-06-22T13:50:23.335229+00:00" + } + ``` + ## Next steps |
virtual-machines | Move Virtual Machines Regional Zonal Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/move-virtual-machines-regional-zonal-faq.md | + + Title: FAQ - Move Azure single instance Virtual Machines from regional to zonal availability zones +description: FAQs for single instance Azure virtual machines from a regional configuration to a target Availability Zone within the same Azure region. +++ Last updated : 09/25/2023++++# Frequently asked questions - Move Azure single instance virtual machines from regional to zonal target availability zones ++This article answers common questions about moving Azure single instance virtual machines from a regional to a zonal configuration. ++> [!IMPORTANT] +> Regional to zonal move of single instance VM(s) configuration is currently in *Public Preview*. ++## Regional to zonal move ++### Can I move virtual machine(s) in all Azure regions? ++Currently, you can move virtual machine(s) across all public regions that are supported by Availability Zones. Learn more about the availability zone service and regional support. ++### Where is the metadata stored? ++The metadata associated with the move is stored in an Azure Cosmos DB database located in either the East US 2 or North Europe regions and in Azure Blob storage in a Microsoft subscription. ++Although the coverage will eventually extend to other regions, this doesn't restrict you from moving virtual machines to other regions. The service doesn't retain any customer data, and no customer data goes outside of the source virtual machine region. ++### Is the collected metadata encrypted? ++Yes, the collected metadata is encrypted both during transit and at rest. While in transit, the metadata is securely sent to the Resource Mover service over the internet using HTTPS. The metadata is also encrypted while in storage. ++### What resources are supported for this Zonal Move?
++Currently, only single-instance virtual machines with managed disks are supported. ++### What source resources can be used in the target zonal configuration, if preferred? ++The following resources can be used in the target zonal configuration: +- Networking resources such as VNET, Subnet, and NSG can be reused. +- Public IP address (Standard SKU) +- Load Balancers (Standard SKU) +++### What resources are created new by default in the target zonal configuration? ++The following resources are created in the target zonal configuration: ++- **Resource group**: By default, a new resource group is automatically created. The source resource group can't be used, as we're using the same source virtual machine name in the target zone and two identical virtual machines can't coexist in the same resource group. However, you can still modify the properties of the new resource group or choose a different target resource group. +- **Virtual machine**: A copy of the source virtual machine is created in the target zonal configuration. The source virtual machine remains unchanged and is stopped after the transfer. +- **Disks**: The disks attached to the source virtual machine are recreated in the target zonal configuration. +- **NIC**: A new network interface card (NIC) is produced and linked to the newly created virtual machine in the designated zone. ++### What permissions do I need to use managed identity? ++To use the managed identity service, you must have the following permissions: ++- Permission to write or create resources in your subscription (which is available with the *Contributor* role). +- Permission to create role assignments (which is available with the *Owner* or *User Access Administrator* roles, or custom roles that have the `Microsoft.Authorization/roleAssignments/write` permission assigned). + This permission isn't required if the data share resource's managed identity has already been granted access to the Azure data store.
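The role requirement above can be illustrated with Azure CLI. This is a hypothetical sketch, not part of the article: the account name and subscription ID are placeholders, and the commands shown are standard Azure CLI role-management commands:

```azurecli
# Check which roles your account holds at the subscription scope
# (placeholder account and subscription ID)
az role assignment list \
    --assignee "user@contoso.com" \
    --scope "/subscriptions/00000000-0000-0000-0000-000000000000" \
    --query "[].roleDefinitionName" \
    --output tsv

# Inspect the actions a built-in role carries, to confirm it includes
# the role-assignment write permission
az role definition list \
    --name "User Access Administrator" \
    --query "[].permissions[].actions" \
    --output json
```

A custom role would qualify if its `actions` list includes `Microsoft.Authorization/roleAssignments/write` (or a wildcard that covers it).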
++When adding resources in the portal, permissions to use managed identity are handled automatically as long as you have the appropriate role assignments. +++> [!IMPORTANT] +> We recommend that you don't modify or remove identity role assignments. ++### What if I don't have permissions to assign role identity? ++There are a couple of reasons you might not have the permissions. Consider the following scenarios: ++| Scenario | Resolution | +| | | +| You don't have Contributor and User Access Administrator (or Owner) permissions when you add a resource for the first time. | Use an account with Contributor and User Access Administrator (or Owner) permissions for the subscription.| +| The Resource Mover managed identity doesn't have the necessary role. | Add the Contributor and User Access Administrator roles. | +++### How is managed identity used? ++Managed identity, previously known as Managed Service Identity (MSI), is a feature that provides Azure services with an automatically managed identity in Azure AD. This identity is used to access Azure subscriptions and perform various tasks, such as moving resources to Availability Zones. ++- Managed identity is used so that you can access Azure subscriptions to move resources to availability zones. +- To move resources using a move collection, you need a system-assigned identity that has access to the subscription containing the resources you want to move. +- If you're using the Azure portal to move the virtual machines, this process is automated once user consent is provided. The process typically takes a few minutes to complete. ++### Can I move my resources from Regional to Zonal and across subscriptions? ++You can use Azure Resource Manager to move virtual machines from a regional to a zonal deployment within the same subscription, and then move them across subscriptions. ++### Are Azure Backup/DR, RBAC, Tags, Policies, and extensions on virtual machines supported?
++Only tags and user-assigned managed identities are replicated to the target zones. RBAC, policies, and extensions must be reconfigured after the move. See the support matrix for further details. ++### Is customer data stored during the move? ++Customer data isn't stored during the move. The system only stores metadata information that helps track and monitor the progress of the resources being moved. ++### What happens to the source virtual machine(s)? ++When you select **Move**, the following steps are performed on the source virtual machines: ++1. The source virtual machines are stopped and left intact in their original configuration. +2. Virtual machine restore points of the source virtual machine are taken. These restore points contain a disk restore point for each of the attached disks and a disk restore point consists of a snapshot of an individual managed disk. +3. Using these restore points, a new virtual machine with its associated disks (a copy of the source) is created in the zonal configuration. +4. After the move is complete, you can choose to delete the source virtual machines. +++### Is there any cost associated with this move? ++The Zonal Move feature of virtual machines is offered free of cost, but you may incur costs for the creation of disk snapshots or restore points. ++> [!NOTE] +> The snapshots of the virtual machine disks are automatically deleted after the move is complete. ++### Can I retain the Public IP of the source virtual machine? ++Review the following scenarios where you can or can't retain Public IP addresses associated with the source virtual machine. ++| Source Property| Description | +| | | +| Public IP addresses (Basic SKU) attached to source virtual machine NIC | The source public IP address isn't retained. <br> <br> The source public IP SKU doesn’t support target zonal configuration. <br> By default, a copy of the source virtual machine and a new network interface card (NIC) is created.
The source virtual machine and NIC are left intact after the move and the source virtual machine will be in a shutdown state.| +| Public IP addresses (Standard SKU) attached to source virtual machine NIC | The source Public IP address isn't retained. <br><br> A new NIC and a copy of the source virtual machine (VM) are created, and both the source virtual machine and NIC remain intact after the move. However, the virtual machine will be in a shutdown state. <br><br> **Note**: After the move, if you wish, you can separate the source public IP from the source NIC and connect it to a new target zonal virtual machine NIC. | +| Public IP address (Basic SKU) attached to Load Balancer (Basic SKU) | The source public IP address isn't retained. <br><br> Source Public IP SKU doesn’t support target zonal configuration.| +| Public IP address (Standard SKU) with Non-Zonal configuration attached to Load Balancer (Standard SKU) | Source Public IP address is retained. | +| Public IP address (Standard SKU) with Zone pinned configuration attached to Load Balancer (Standard SKU)| Source Public IP address is retained. <br><br> **Note:** The target virtual machine's zone number might not be the same as the zone of the pinned Public IP.| +| Public IP address (Standard SKU) with Zone redundant configuration attached to Load Balancer (Standard SKU)| Source Public IP address is retained.| ++## Next steps ++- Learn more about [moving single instance Azure VMs from regional to zonal configuration](../reliability/migrate-vm.md#migration-option-2-vm-regional-to-zonal-move). |
virtual-machines | Move Virtual Machines Regional Zonal Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/move-virtual-machines-regional-zonal-portal.md | + + Title: Tutorial - Move Azure single instance Virtual Machines from regional to zonal availability zones +description: Learn how to move single instance Azure virtual machines from a regional configuration to a target Availability Zone within the same Azure region. +++ Last updated : 09/25/2023++++# Move Azure single instance VMs from regional to zonal target availability zones ++This article provides information on how to move Azure single instance Virtual Machines (VMs) from a regional to a zonal configuration within the same Azure region. ++> [!IMPORTANT] +> Regional to zonal move of single instance VM(s) configuration is currently in *Public Preview*. +## Prerequisites ++Ensure the following before you begin: ++- **Availability zone regions support**: Ensure that the regions you want to move to are supported by Availability Zones. [Learn more](../reliability/availability-zones-service-support.md) about the supported regions. ++- **VM SKU availability**: The availability of VM sizes, or SKUs, can differ based on the region and zone. Plan for the VM sizes you need when using Availability Zones. [Learn more](../virtual-machines/windows/create-powershell-availability-zone.md#check-vm-sku-availability) about the available VM SKUs for each Azure region and zone. ++- **Subscription permissions**: Check that you have *Owner* access on the subscription containing the VMs that you want to move. + The first time you add a VM to be moved to Zonal configuration, a [system-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) (formerly, Managed Service Identity (MSI)) that's trusted by the subscription is necessary.
To create the identity, and to assign it the required role (Contributor or User Access administrator in the source subscription), the account you use to add resources needs Owner permissions on the subscription. [Learn more](../role-based-access-control/rbac-and-directory-admin-roles.md#azure-roles) about Azure roles. ++- **VM support**: Check that the VMs you want to move are supported. [Learn more](../reliability/migrate-vm.md). Check supported VM settings. + +- **Subscription quota**: The subscription must have enough quota to create the new VM and associated networking resources in target zonal configuration (in same region). If the subscription doesn't have enough quota, you need to [request additional limits](../azure-resource-manager/management/azure-subscription-service-limits.md). +- **VM health status**: The VMs you want to move must be in a healthy state before attempting the zonal move. Ensure that all pending reboots and mandatory updates are complete. ++## Select and move VMs ++To select the VMs you want to move from Regional to Zonal configuration within same region, follow these steps: ++### Select the VMs ++To select the VMs for the move, follow these steps: ++1. On the [Azure portal](https://ms.portal.azure.com/#home), select the VM. In this tutorial, we're using **DemoTestVM1** as an example. + + :::image type="content" source="./media/tutorial-move-regional-zonal/demo-test-machine.png" alt-text="Screenshot of demo virtual machine."::: + +2. In the DemoTestVM1 resource pane, select **Availability + scaling** > **edit**. + :::image type="content" source="./media/tutorial-move-regional-zonal/availability-scaling.png" alt-text="Screenshot of Availability + scaling option."::: ++ Alternatively, in the **DemoTestVM1** overview plane, you can select **Availability + scale** > **Availability + scaling**. 
+ :::image type="content" source="./media/tutorial-move-regional-zonal/availability-scaling-home.png" alt-text="Screenshot of Availability + scaling homepage."::: + +### Select the target availability zones ++To select the target availability zones, follow these steps: ++1. Under **Target availability zone**, select the desired target availability zones for the VM. For example, Zone 1. + + + >[!Important] + >If you select an unsupported VM to move, the validation fails. In this case, you must restart the workflow and select a supported VM. Refer to the [Support Matrix](../reliability/migrate-vm.md#support-matrix) to learn more about unsupported VM types. ++1. If Azure recommends optimizing the VM size, select the recommended VM size to increase the chances of a successful deployment in the selected zone. Alternatively, you can change the zone while keeping the same source VM size. + + :::image type="content" source="./media/tutorial-move-regional-zonal/aure-recommendation.png" alt-text="Screenshot showing Azure recommendation to increase virtual machine size."::: ++1. Select the consent statement for the **System Assigned Managed Identity** process, then select **Next**. ++ :::image type="content" source="./media/tutorial-move-regional-zonal/move-virtual-machine-availability-zone.png" alt-text="Screenshot of select target availability zone."::: + + The MSI authentication process takes a few minutes to complete. During this time, progress updates are displayed on the screen. + +### Review the properties of the VM + +To review the properties of the VM before you commit the move, follow these steps: ++1. On the **Review properties** pane, review the VM properties. + #### VM properties + + The following tables describe the impact of the move on the VM properties.
+ + **The following source VM properties are retained in the target zonal VM by default:** + + | Property | Description | + | | | + | VM name | Source VM name is retained in the target zonal VM by default. | + | VNET | By default, the source VNET is retained and the target zonal VM is created within the same VNET. You can also create a new VNET or choose an existing one from the target zonal configuration. | + | Subnet | By default, the source subnet is retained, and the target zonal virtual machine is created within the same subnet. You can create a new subnet or choose an existing one from the target zonal configuration. | + | NSG | Source NSG is retained by default and the target zonal VM is created within the same NSG. You can create a new NSG or choose an existing one from the target zonal configuration. | + | Load balancer (Standard SKU) | Standard SKU load balancers support target zonal configuration and are retained. | + | Public IP (Standard SKU) | Standard SKU public IPs support target zonal configuration and are retained. | + + **The following source VM resources are created anew in the target zonal configuration by default:** + + | Property | Description | + | | | + | VM | A copy of the source VM is created in the target zonal configuration. The source VM is left intact and stopped after the move. <br> Source VM ARM ID is not retained. | + | Resource group | By default, a new resource group is created because the source resource group can't be used. Because the target VM keeps the source VM name, two identically named VMs can't exist in the same resource group. <br> However, you can move the VM to an existing resource group in the target zone. | + | NIC | A new NIC is created in the target zonal configuration. The source NIC is left intact. <br> Source NIC ARM ID is not retained.
| + | Disks | The disks attached to the source VM are recreated with a new disk name in the target zonal configuration and attached to the newly created zonal VM.| + | Load balancer (Basic SKU) | Basic SKU load balancers don't support target zonal configuration and aren't retained. <br> A new Standard SKU load balancer is created by default. <br> However, you can edit the load balancer properties, or you can select an existing target load balancer.| + | Public IP (Basic SKU) | Basic SKU public IPs aren't retained after the move because they don't support target zonal configurations. <br> By default, a new Standard SKU public IP is created. <br> However, you can edit the public IP properties, or you can select an existing target public IP.| ++2. Review and fix any errors. + +3. Select the consent statement at the bottom of the page before moving the resources. + :::image type="content" source="./media/tutorial-move-regional-zonal/migrate-vms.png" alt-text="Screenshot of migrating virtual machine page."::: + +### Move the VMs ++Select **Move** to complete the move to availability zones. +++During this process: +* The source virtual machine is stopped, so there's a brief downtime. +* A copy of the source VM is created in the target zonal configuration, and the new virtual machine is up and running. ++### Configure settings post move ++Review all the source VM settings and reconfigure extensions, RBAC, public IPs, backup/DR, and so on, as desired. ++### Delete source VM ++The source VM remains stopped after the move is complete. You can choose to either delete it or use it for another purpose, based on your requirements. ++## Delete additional resources created for move ++After the move, you can manually delete the move collection that was created. ++To manually remove the move collection, follow these steps: ++1. Ensure you can view hidden resources, because the move collection is hidden by default. +2. 
Select the resource group of the move collection by using the search string *ZonalMove-MC-RG-SourceRegion*. +3. Delete the move collection. For example, *ZonalMove-MC-RG-UKSouth*. +++> [!NOTE] +> The move collection is hidden by default; enable viewing hidden resources to see it. ++## Next steps ++Learn how to move single instance Azure VMs from regional to zonal configuration using [PowerShell or CLI](./move-virtual-machines-regional-zonal-powershell.md). |
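The hidden resource-group search string used in the cleanup steps above follows a naming pattern. As a rough illustration only (the *ZonalMove-MC-RG-SourceRegion* pattern is taken from the example in this article; verify it against your own tenant before relying on it), the search string can be derived from the source region name:

```python
def move_collection_rg_search_string(source_region: str) -> str:
    """Derive the search string for the hidden move-collection resource group.

    Assumes the ZonalMove-MC-RG-<SourceRegion> pattern shown in the steps
    above; confirm against your own tenant before relying on it.
    """
    # Region display names such as "UK South" appear without spaces in the
    # resource group name, for example "ZonalMove-MC-RG-UKSouth".
    return "ZonalMove-MC-RG-" + source_region.replace(" ", "")


print(move_collection_rg_search_string("UK South"))  # ZonalMove-MC-RG-UKSouth
```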
virtual-machines | Move Virtual Machines Regional Zonal Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/move-virtual-machines-regional-zonal-powershell.md | + + Title: Move Azure single instance Virtual Machines from regional to zonal availability zones using PowerShell and CLI +description: Move single instance Azure virtual machines from a regional configuration to a target Availability Zone within the same Azure region using PowerShell and CLI. +++ Last updated : 09/25/2023++++# Move a virtual machine in an availability zone using Azure PowerShell and CLI ++This article details how to use Azure PowerShell and CLI cmdlets to move Azure single instance VMs from regional to zonal availability zones. An [availability zone](../availability-zones/az-overview.md) is a physically separate zone in an Azure region. Use availability zones to protect your apps and data from an unlikely failure or loss of an entire data center. ++To use an availability zone, create your virtual machine in a [supported Azure region](../availability-zones/az-region.md). ++> [!IMPORTANT] +> Regional to zonal move of single instance VM(s) configuration is currently in *Public Preview*. ++## Prerequisites ++Verify the following requirements: ++| Requirement | Description | +| | | +| **Subscription permissions** | Ensure you have *Owner* access on the subscription containing the resources that you want to move.<br/><br/> [Managed identity](../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) needs these permissions: <br> - Permission to write or create resources in the user subscription, available with the *Contributor* role. <br> - Permission to create role assignments. Typically available with the *Owner* or *User Access Administrator* roles, or with a custom role that has the `Microsoft.Authorization/roleAssignments/write` permission assigned.
This permission isn't needed if the managed identity has already been granted the required access. <br> [Learn more](../role-based-access-control/rbac-and-directory-admin-roles.md#azure-roles) about Azure roles. | +| **VM support** | [Review](../resource-mover/common-questions.md) the supported regions. <br><br> - Check supported [compute](../resource-mover/support-matrix-move-region-azure-vm.md#supported-vm-compute-settings), [storage](../resource-mover/support-matrix-move-region-azure-vm.md#supported-vm-storage-settings), and [networking](../resource-mover/support-matrix-move-region-azure-vm.md#supported-vm-networking-settings) settings.| +| **VM health status** | The VMs you want to move must be in a healthy state before attempting the zonal move. Ensure that all pending reboots and mandatory updates are complete and that the virtual machine is in a healthy state. | +++### Review PowerShell and CLI requirements ++Most move operations are the same whether you use the Azure portal, PowerShell, or the CLI, with a couple of exceptions. ++| Operation | Portal | PowerShell/CLI | +| | | | +| **Create a move collection** | A move collection (a list of all the regional VMs that you're moving) is created automatically. Required identity permissions are assigned in the backend by the portal. | You can use [PowerShell cmdlets](/powershell/module/az.resourcemover/?view=azps-10.3.0#resource-mover) or [CLI cmdlets](https://learn.microsoft.com/cli/azure/resource-mover?view=azure-cli-latest) to: <br> - Assign a managed identity to the collection. <br> - Add regional VMs to the collection. | +| **Resource move operations** | Validation steps validate the *user* setting changes. **Initiate move** starts the move process and creates a copy of the source VM in the target zone. It also finalizes the move of the newly created VM in the target zone.
| [PowerShell cmdlets](/powershell/module/az.resourcemover/?view=azps-10.3.0#resource-mover) or [CLI cmdlets](https://learn.microsoft.com/cli/azure/resource-mover?view=azure-cli-latest) to: <br> - Add regional VMs to the collection <br> - Resolve dependencies <br> - Perform the move. <br> - Commit the move. | ++### Sample values ++We use these values in our script examples: ++| Setting | Value | +| | | +| Subscription ID | subscription-id | +| Move Region | East US | +| Resource group (holding metadata for move collection) | RegionToZone-DemoMCRG | +| Move collection name | RegionToZone-DemoMC | +| Location of the move collection | eastus2euap | +| IdentityType | SystemAssigned | +| VM name | demoVM-MoveResource | +| Move Type | RegionToZone | ++## Sign in to Azure ++Sign in to your Azure subscription with the `Connect-AzAccount` command and follow the on-screen directions. ++```powershell-interactive +Connect-AzAccount -Subscription "<subscription-id>" +``` ++## Set up the move collection ++The MoveCollection object stores metadata and configuration information about the resources you want to move. To set up a move collection, do the following: ++- Create a resource group for the move collection. +- Register the service provider to the subscription, so that the MoveCollection resource can be created. +- Create the MoveCollection object with managed identity. For the MoveCollection object to access the subscription in which the Resource Mover service is located, it needs a [system-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) (formerly known as Managed Service Identity (MSI)) that's trusted by the subscription. +- Grant access to the Resource Mover subscription for the managed identity.
++## Create the resource group ++Use the following cmdlet to create a resource group for the move collection metadata and configuration information with [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup). A resource group is a logical container into which Azure resources are deployed and managed. ++# [PowerShell](#tab/PowerShell) ++```powershell-interactive +New-AzResourceGroup -Name "RegionToZone-DemoMCRG" -Location "EastUS" +``` ++**Output**: ++The output shows the new resource group: ++```powershell +ResourceGroupName : RegionToZone-DemoMCRG +Location : eastus +ProvisioningState : Succeeded +Tags : + Name Value + ======= ======== + Created 20230908 ++ResourceId : /subscriptions/<Subscription-id>/resourceGroups/RegionToZone-DemoMCRG +``` ++# [CLI](#tab/CLI) ++```azurecli-interactive +az group create --location eastus2 --name clidemo-RG +``` ++**Output**: ++```azurecli +{ + "id": "/subscriptions/e80eb9fa-c996-4435-aa32-5af6f3d3077c/resourceGroups/clidemo-RG", + "location": "eastus", + "managedBy": null, + "name": "clidemo-RG", + "properties": { + "provisioningState": "Succeeded" + }, + "tags": { + "Created": "20230921" + }, + "type": "Microsoft.Resources/resourceGroups" +} +``` ++++## Register the resource provider ++1. Register the resource provider Microsoft.Migrate, so that the MoveCollection resource can be created, as follows: ++ ```azurepowershell-interactive + Register-AzResourceProvider -ProviderNamespace Microsoft.Migrate + ``` ++2. Wait for registration: ++ ```azurepowershell-interactive + While(((Get-AzResourceProvider -ProviderNamespace Microsoft.Migrate)| where {$_.RegistrationState -eq "Registered" -and $_.ResourceTypes.ResourceTypeName -eq "moveCollections"}|measure).Count -eq 0) + { + Start-Sleep -Seconds 5 + Write-Output "Waiting for registration to complete."
+ } + ``` ++## Create a MoveCollection object ++Create a MoveCollection object, and assign a managed identity to it, as follows: ++# [PowerShell](#tab/PowerShell) ++```azurepowershell-interactive +New-AzResourceMoverMoveCollection -Name "RegionToZone-DemoMC" -ResourceGroupName "RegionToZone-DemoMCRG" -MoveRegion "eastus" -Location "eastus2euap" -IdentityType "SystemAssigned" -MoveType "RegionToZone" +``` ++**Output**: ++```powershell +Etag Location Name +- -- - +"3a00c441-0000-3400-0000-64fac1b30000" eastus2euap RegionToZone-DemoMC +``` ++# [CLI](#tab/CLI) ++```azurecli-interactive +az resource-mover move-collection create --identity type=SystemAssigned --location eastus2 --move-region uksouth --name cliDemo-zonalMC --resource-group clidemo-RG --move-type RegionToZone +``` ++**Output**: ++```azurecli +{ + "etag": "\"1c00c55a-0000-0200-0000-650c15c40000\"", + "id": "/subscriptions/e80eb9fa-c996-4435-aa32-5af6f3d3077c/resourceGroups/clidemo-RG/providers/Microsoft.Migrate/moveCollections/cliDemo-zonalMC", + "identity": { + "principalId": "45bc279c-3353-4f6a-bb4f-8efb48faba59", + "tenantId": "72f988bf-86f1-41af-91ab-2d7cd011db47", + "type": "SystemAssigned" + }, + "location": "eastus2", + "name": "cliDemo-zonalMC", + "properties": { + "moveRegion": "uksouth", + "moveType": "RegionToZone", + "provisioningState": "Succeeded", + "version": "V2" + }, + "resourceGroup": "clidemo-RG", + "systemData": { + "createdAt": "2023-09-21T10:06:58.5788527Z", + "createdBy": "yashjain@microsoft.com", + "createdByType": "User", + "lastModifiedAt": "2023-09-21T10:06:58.5788527Z", + "lastModifiedBy": "yashjain@microsoft.com", + "lastModifiedByType": "User" + }, + "type": "Microsoft.Migrate/moveCollections" +} +``` ++++>[!NOTE] +> For Regional to zonal move, the `MoveType` parameter should be set as *RegionToZone* and `MoveRegion` parameter should be set as the location where resources undergoing zonal move reside. 
The `SourceRegion` and `TargetRegion` parameters aren't required and should be set to *null*. ++## Grant access to the managed identity ++Grant the managed identity access to the Resource Mover subscription as follows. You must be the subscription owner. ++1. Retrieve identity details from the MoveCollection object. ++ ```azurepowershell-interactive + $moveCollection = Get-AzResourceMoverMoveCollection -Name "RegionToZone-DemoMC" -ResourceGroupName "RegionToZone-DemoMCRG" + $identityPrincipalId = $moveCollection.IdentityPrincipalId + ``` ++2. Assign the required roles to the identity so Azure Resource Mover can access your subscription to help move resources. Review the list of [required permissions](../resource-mover/common-questions.md#what-managed-identity-permissions-does-resource-mover-need) for the move. ++ # [PowerShell](#tab/PowerShell) +++ ```azurepowershell-interactive + New-AzRoleAssignment -ObjectId $identityPrincipalId -RoleDefinitionName Contributor -Scope "/subscriptions/<subscription-id>" + New-AzRoleAssignment -ObjectId $identityPrincipalId -RoleDefinitionName "User Access Administrator" -Scope "/subscriptions/<subscription-id>" + ``` ++ # [CLI](#tab/CLI) ++ ```azurecli-interactive + az role assignment create --assignee-object-id 45bc279c-3353-4f6a-bb4f-8efb48faba59 --assignee-principal-type ServicePrincipal --role Contributor --scope /subscriptions/<Subscription-id> + az role assignment create --assignee-object-id 45bc279c-3353-4f6a-bb4f-8efb48faba59 --assignee-principal-type ServicePrincipal --role "User Access Administrator" --scope /subscriptions/<Subscription-id> ++ ``` ++ ++## Add regional VMs to the move collection ++Retrieve the IDs for existing source resources that you want to move. Create the destination resource settings object, then add resources to the move collection. ++> [!NOTE] +> Resources added to a move collection must be in the same subscription but can be in different resource groups. ++1. 
Create target resource setting object as follows: ++ ```azurepowershell-interactive + $targetResourceSettingsObj = New-Object Microsoft.Azure.PowerShell.Cmdlets.ResourceMover.Models.Api20230801.VirtualMachineResourceSettings + $targetResourceSettingsObj.ResourceType = "Microsoft.Compute/virtualMachines" + $targetResourceSettingsObj.TargetResourceName = "RegionToZone-demoTargetVm" + $targetResourceSettingsObj.TargetAvailabilityZone = "2" + ``` + + **Output** <br> ++ ```powershell + ResourceType TargetResourceGroupName TargetResourceName TargetAvailabilitySetId TargetAvailabilityZone TargetVMSize UserManagedIdentity + -- -- - - + Microsoft.Compute/virtualMachines RegionToZone-demoTargetVm 2 + ``` +++1. Add resources ++ # [PowerShell](#tab/PowerShell) ++ ```azurepowershell-interactive + Add-AzResourceMoverMoveResource -ResourceGroupName "RegionToZone-DemoMCRG" -MoveCollectionName "RegionToZone-DemoMC" -SourceId "/subscriptions/<Subscription-id>/resourcegroups/PS-demo-RegionToZone-RG/providers/Microsoft.Compute/virtualMachines/RegionToZone-demoSourceVm" -Name "demoVM-MoveResource" -ResourceSetting $targetResourceSettingsObj + ``` ++ **Output** ++ ```powershell + DependsOn : {} + DependsOnOverride : {} + ErrorsPropertiesCode : + ErrorsPropertiesDetail : + ErrorsPropertiesMessage : + ErrorsPropertiesTarget : + ExistingTargetId : + Id : /subscriptions/<Subscription-id>/resourceGroups/RegionToZone-DemoMCRG/providers/Microsoft.Migrate/moveCollections/Re + gionToZone-DemoMC/moveResources/demoVM-MoveResource + IsResolveRequired : False + JobStatusJobName : + JobStatusJobProgress : + MoveStatusErrorsPropertiesCode : DependencyComputationPending + MoveStatusErrorsPropertiesDetail : {} + MoveStatusErrorsPropertiesMessage : The dependency computation is not completed for resource - /subscriptions/<Subscription-id>/resourcegroups/PS-demo-R + egionToZone-RG/providers/Microsoft.Compute/virtualMachines/RegionToZone-demoSourceVm'. 
+ Possible Causes: Dependency computation is pending for resource. + Recommended Action: Validate dependencies to compute the dependencies. + + MoveStatusErrorsPropertiesTarget : + MoveStatusMoveState : MovePending + Name : demoVM-MoveResource + ProvisioningState : Succeeded + ResourceSetting : Microsoft.Azure.PowerShell.Cmdlets.ResourceMover.Models.Api20230801.VirtualMachineResourceSettings + SourceId : /subscriptions/<Subscription-id>/resourcegroups/PS-demo-RegionToZone-RG/providers/Microsoft.Compute/virtualMachines/ + RegionToZone-demoSourceVm + SourceResourceSetting : Microsoft.Azure.PowerShell.Cmdlets.ResourceMover.Models.Api20230801.VirtualMachineResourceSettings + SystemDataCreatedAt : 9/8/2023 6:48:11 AM + SystemDataCreatedBy : xxxxx@microsoft.com + SystemDataCreatedByType : User + SystemDataLastModifiedAt : 9/8/2023 6:48:11 AM + SystemDataLastModifiedBy : xxxxx@microsoft.com + SystemDataLastModifiedByType : User + TargetId : + Type : + ``` ++ # [CLI](#tab/CLI) ++ ```azurecli-interactive + az resource-mover move-resource add --resource-group clidemo-RG --move-collection-name cliDemo-zonalMC --name vm-demoMR --source-id "/subscriptions/e80eb9fa-c996-4435-aa32-5af6f3d3077c/resourceGroups/regionToZone-bugBash/providers/Microsoft.Compute/virtualMachines/regionToZone-test-LRS" --resource-settings '{ "resourceType": "Microsoft.Compute/virtualMachines", "targetResourceName": "regionToZone-test-LRS", "targetAvailabilityZone": "2", "targetVmSize": "Standard_B2s" }' + ``` + **Output** ++ ```azurecli + { + "id": "/subscriptions/e80eb9fa-c996-4435-aa32-5af6f3d3077c/resourceGroups/clidemo-RG/providers/Microsoft.Migrate/moveCollections/cliDemo-zonalMC/moveResources/vm-demoMR", + "name": "vm-demoMR", + "properties": { + "dependsOn": [], + "dependsOnOverrides": [], + "isResolveRequired": false, + "moveStatus": { + "errors": { + "properties": { + "code": "DependencyComputationPending", + "details": [], + "message": "The dependency computation is not completed for resource - 
/subscriptions/e80eb9fa-c996-4435-aa32-5af6f3d3077c/resourceGroups/regionToZone-bugBash/providers/Microsoft.Compute/virtualMachines/regionToZone-test-LRS'.\n Possible Causes: Dependency computation is pending for resource.\n Recommended Action: Validate dependencies to compute the dependencies.\n " + } + }, + "moveState": "MovePending" + }, + "provisioningState": "Succeeded", + "resourceSettings": { + "resourceType": "Microsoft.Compute/virtualMachines", + "targetAvailabilityZone": "2", + "targetResourceName": "regionToZone-test-LRS", + "targetVmSize": "Standard_B2s" + }, + "sourceId": "/subscriptions/e80eb9fa-c996-4435-aa32-5af6f3d3077c/resourceGroups/regionToZone-bugBash/providers/Microsoft.Compute/virtualMachines/regionToZone-test-LRS", + "sourceResourceSettings": { + "resourceType": "Microsoft.Compute/virtualMachines", + "tags": { + "azsecpack": "nonprod", + "platformsettings.host_environment.service.platform_optedin_for_rootcerts": "true" + }, + "targetResourceName": "regionToZone-test-LRS", + "targetVmSize": "Standard_B2s", + "userManagedIdentities": [ + "/subscriptions/e80eb9fa-c996-4435-aa32-5af6f3d3077c/resourceGroups/AzSecPackAutoConfigRG/providers/Microsoft.ManagedIdentity/userAssignedIdentities/AzSecPackAutoConfigUA-uksouth" + ] + } + }, + "resourceGroup": "clidemo-RG", + "systemData": { + "createdAt": "2023-09-21T10:35:03.2036685Z", + "createdBy": "yashjain@microsoft.com", + "createdByType": "User", + "lastModifiedAt": "2023-09-21T10:35:03.2036685Z", + "lastModifiedBy": "yashjain@microsoft.com", + "lastModifiedByType": "User" + } + } + ``` + + + +## Modify settings ++You can modify destination settings when moving Azure VMs and associated resources. We recommend that you only change destination settings before you validate the move collection. ++**Settings that you can modify are:** ++- **Virtual machine settings:** Resource group, VM name, VM availability zone, VM SKU, VM key vault, and Disk encryption set. 
+- **Networking resource settings:** For network interfaces, virtual networks (VNets), and network security groups, you can either: + - Use an existing networking resource in the destination region. + - Create a new resource with a different name. +- **Public IP/Load Balancer:** SKU and Zone +++Modify settings as follows: ++1. Retrieve the move resource for which you want to edit properties. For example, to retrieve a VM, run: ++ ```azurepowershell-interactive + $moveResourceObj = Get-AzResourceMoverMoveResource -MoveCollectionName "RegionToZone-DemoMC" -ResourceGroupName "RegionToZone-DemoMCRG" -Name "PSDemoVM" + ``` ++2. Copy the resource setting to a target resource setting object. ++ ```azurepowershell-interactive + $TargetResourceSettingObj = $moveResourceObj.ResourceSetting + ``` ++3. Set the parameter in the target resource setting object. For example, to change the name of the destination VM: ++ ```azurepowershell-interactive + $TargetResourceSettingObj.TargetResourceName="PSDemoVM-target" + ``` +4. Update the move resource destination settings. In this example, we change the name of the VM from PSDemoVM to PSDemoVM-target. ++ ```azurepowershell-interactive + Update-AzResourceMoverMoveResource -ResourceGroupName "RegionToZone-DemoMCRG" -MoveCollectionName "RegionToZone-DemoMC" -SourceId "/subscriptions/<Subscription-id>/resourceGroups/PSDemoRM/providers/Microsoft.Compute/virtualMachines/PSDemoVM" -Name "PSDemoVM" -ResourceSetting $TargetResourceSettingObj + ``` +++## Resolve dependencies ++Check whether the regional VMs you added have any dependencies on other resources, and add them as needed. ++1. 
Resolve dependencies as follows: + + # [PowerShell](#tab/PowerShell) + + ```azurepowershell-interactive + Resolve-AzResourceMoverMoveCollectionDependency -ResourceGroupName "RegionToZone-DemoMCRG" -MoveCollectionName "RegionToZone-DemoMC" + ``` + + **Output (when dependencies exist)** + + ```powershell + AdditionalInfo : + Code : + Detail : + EndTime : 9/8/2023 6:52:14 AM + Id : /subscriptions/<Subscription-id>/resourceGroups/RegionToZone-DemoMCRG/providers/Microsoft.Migrate/moveCollections/RegionToZone-DemoMC/o + perations/bc68354b-ec1f-44cb-92ab-fb3b4ad90229 + Message : + Name : bc68354b-ec1f-44cb-92ab-fb3b4ad90229 + Property : Microsoft.Azure.PowerShell.Cmdlets.ResourceMover.Models.Any + StartTime : 9/8/2023 6:51:50 AM + Status : Succeeded + ``` + + # [CLI](#tab/CLI) + + ```azurecli-interactive + az resource-mover move-collection resolve-dependency --name cliDemo-zonalMC --resource-group clidemo-RG + ``` + **Output (when dependencies exist)** + + ```azurecli + { + "endTime": "9/21/2023 10:46:30 AM", + "id": "/subscriptions/e80eb9fa-c996-4435-aa32-5af6f3d3077c/resourceGroups/clidemo-RG/providers/Microsoft.Migrate/moveCollections/cliDemo-zonalMC/operations/9bd337d0-90d5-4537-bdab-a7c0cd33e6d5", + "name": "9bd337d0-90d5-4537-bdab-a7c0cd33e6d5", + "resourceGroup": "clidemo-RG", + "startTime": "9/21/2023 10:46:17 AM", + "status": "Succeeded" + } + ``` + + +++1. 
To get a list of resources added to the move collection: + # [PowerShell](#tab/PowerShell) ++ ```azurepowershell-interactive + $list = Get-AzResourceMoverMoveResource -ResourceGroupName "RegionToZone-DemoMCRG" -MoveCollectionName "RegionToZone-DemoMC" + $list.Name + ``` ++ **Output:** ++ ```powershell + demoVM-MoveResource + mr_regiontozone-demosourcevm661_d6f18900-3b87-4fb5-9bdf-12da2f9fb185 + mr_regiontozone-demosourcevm-vnet_d8536bf5-2d5f-4778-9650-32d0570bc41a + mr_regiontozone-demosourcevm-ip_6af03f1f-eae8-4541-83f5-97a2506cfc3e + mr_regiontozone-demosourcevm-nsg_98d68420-d7ff-4e2d-b758-25a6df80fca7 + mr_nrms-timkbo3hy3nnmregiontozone-demosourcevm-vnet_f474c880-4823-4ed3-b761-96df6500f6a3 + ``` ++ # [CLI](#tab/CLI) + + ```azurecli-interactive + az resource-mover move-resource list --move-collection-name cliDemo-zonalMC --resource-group clidemo-RG + ``` + ++1. To remove resources from the move collection, follow these [instructions](../resource-mover/remove-move-resources.md). + ++## Initiate move of VM resources ++# [PowerShell](#tab/PowerShell) ++```azurepowershell +Invoke-AzResourceMoverInitiateMove -ResourceGroupName "RegionToZone-DemoMCRG" -MoveCollectionName "RegionToZone-DemoMC" -MoveResource $("demoVM-MoveResource") -MoveResourceInputType "MoveResourceId" +``` ++**Output** ++```powershell +AdditionalInfo : +Code : +Detail : +EndTime : 9/8/2023 7:07:58 AM +Id : /subscriptions/<Subscription-id>/resourceGroups/RegionToZone-DemoMCRG/providers/Microsoft.Migrate/moveCollections/RegionToZone-DemoMC/o + perations/d3e06ac3-a961-4045-8301-aee7f6911160 +Message : +Name : d3e06ac3-a961-4045-8301-aee7f6911160 +Property : Microsoft.Azure.PowerShell.Cmdlets.ResourceMover.Models.Any +StartTime : 9/8/2023 7:01:31 AM +Status : Succeeded +``` ++# [CLI](#tab/CLI) ++```azurecli-interactive +az resource-mover move-collection initiate-move --move-resources 
"/subscriptions/e80eb9fa-c996-4435-aa32-5af6f3d3077c/resourceGroups/clidemo-RG/providers/Microsoft.Migrate/moveCollections/cliDemo-zonalMC/moveResources/vm-demoMR" --validate-only false --name cliDemo-zonalMC --resource-group clidemo-RG +``` ++**Output** ++```azurecli +{ + "endTime": "9/21/2023 11:35:43 AM", + "id": "/subscriptions/e80eb9fa-c996-4435-aa32-5af6f3d3077c/resourceGroups/clidemo-RG/providers/Microsoft.Migrate/moveCollections/cliDemo-zonalMC/operations/e1086818-b38b-4332-ac69-171a2958390c", + "name": "e1086818-b38b-4332-ac69-171a2958390c", + "resourceGroup": "clidemo-RG", + "startTime": "9/21/2023 11:31:28 AM", + "status": "Succeeded" +} +``` ++++## Commit ++After the initial move, you must commit the move or discard it. **Commit** completes the move to the target zonal configuration. ++**Commit the move as follows:** ++ # [PowerShell](#tab/PowerShell) ++ ```azurepowershell-interactive + Invoke-AzResourceMoverCommit -ResourceGroupName "RegionToZone-DemoMCRG" -MoveCollectionName "RegionToZone-DemoMC" -MoveResource $("demoVM-MoveResource") -MoveResourceInputType "MoveResourceId" + ``` ++ **Output**: ++ ```powershell + AdditionalInfo : + Code : + Detail : + EndTime : 9/22/2023 5:26:55 AM + Id : /subscriptions/e80eb9fa-c996-4435-aa32-5af6f3d3077c/resourceGroups/RegionToZone-DemoMCRG/providers/Microsoft.Migrate/moveCollections/RegionToZone-DemoMC/operations/35dd1d93-ba70-4dc9-a17f-7d8ba48678d8 + Message : + Name : 35dd1d93-ba70-4dc9-a17f-7d8ba48678d8 + Property : Microsoft.Azure.PowerShell.Cmdlets.ResourceMover.Models.Any + StartTime : 9/22/2023 5:26:54 AM + Status : Succeeded + ``` ++ # [CLI](#tab/CLI) ++ ```azurecli-interactive + az resource-mover move-collection commit --move-resources "/subscriptions/<Subscription-id>/resourceGroups/clidemo-RG/providers/Microsoft.Migrate/moveCollections/cliDemo-zonalMC/moveResources/vm-demoMR" --validate-only false --name cliDemo-zonalMC --resource-group clidemo-RG + ``` ++ **Output**: ++ 
```azurecli + { + "endTime": "9/21/2023 11:47:14 AM", + "id": "/subscriptions/e80eb9fa-c996-4435-aa32-5af6f3d3077c/resourceGroups/clidemo-RG/providers/Microsoft.Migrate/moveCollections/cliDemo-zonalMC/operations/34c0d405-672f-431a-8879-582c48940b4a", + "name": "34c0d405-672f-431a-8879-582c48940b4a", + "resourceGroup": "clidemo-RG", + "startTime": "9/21/2023 11:45:13 AM", + "status": "Succeeded" + } + ``` ++ +++## Delete source regional VMs ++After you commit the move and verify that the resources work as expected in the target region, you can delete each source resource using: ++- [Azure portal](../azure-resource-manager/management/manage-resources-portal.md#delete-resources) +- [PowerShell](../azure-resource-manager/management/manage-resources-powershell.md#delete-resources) +- [Azure CLI](../azure-resource-manager/management/manage-resource-groups-cli.md#delete-resource-groups) ++## Next steps ++Learn how to move single instance Azure VMs from regional to zonal configuration via [portal](./move-virtual-machines-regional-zonal-portal.md). |
virtual-network | Configure Public Ip Bastion | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/configure-public-ip-bastion.md | Azure Bastion is deployed to provide secure management connectivity to virtual machines. An Azure Bastion host requires a public IP address for its configuration. -In this article, you learn how to create an Azure Bastion host using an existing public IP in your subscription. Azure Bastion doesn't support the change of the public IP address after creation. Azure Bastion doesn't support public IP prefixes. +In this article, you learn how to create an Azure Bastion host using an existing public IP in your subscription. Azure Bastion doesn't support changing the public IP address after creation. Azure Bastion supports assigning an IP address within an IP prefix range but not assigning the IP prefix range itself. >[!NOTE] >[!INCLUDE [Pricing](../../../includes/bastion-pricing.md)] |
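The prefix distinction above can be made concrete with a local check: an address drawn from a prefix range is a single assignable IP, while the prefix itself denotes a whole range. A minimal sketch (the prefix and address values are hypothetical documentation addresses) using Python's standard `ipaddress` module:

```python
import ipaddress

# Hypothetical values: a public IP prefix range and one address taken from it.
prefix = ipaddress.ip_network("203.0.113.0/28")   # the prefix range
candidate = ipaddress.ip_address("203.0.113.5")   # a single address

# An address allocated from within the prefix is a single IP, the kind of
# value Bastion can be assigned...
print(candidate in prefix)        # True: the address lies inside the prefix

# ...but the prefix itself covers many addresses, so it isn't assignable
# as one public IP.
print(prefix.num_addresses)       # 16: a /28 is a range, not a single address
```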
virtual-network | Create Vm Dual Stack Ipv6 Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-vm-dual-stack-ipv6-portal.md | -In this article, you'll create a virtual machine in Azure with the Azure portal. The virtual machine is created along with the dual-stack network as part of the procedures. When completed, the virtual machine supports IPv4 and IPv6 communication. +In this article, you create a virtual machine in Azure with the Azure portal. The virtual machine is created along with the dual-stack network as part of the procedures. When completed, the virtual machine supports IPv4 and IPv6 communication. ## Prerequisites In this article, you'll create a virtual machine in Azure with the Azure portal. ## Create a virtual network -In this section, you'll create a dual-stack virtual network for the virtual machine. +In this section, you create a dual-stack virtual network for the virtual machine. 1. Sign-in to the [Azure portal](https://portal.azure.com). In this section, you'll create a dual-stack virtual network for the virtual mach ## Create public IP addresses -You'll create two public IP addresses in this section, IPv4 and IPv6. +You create two public IP addresses in this section, IPv4 and IPv6. ++### Create IPv4 public IP address 1. In the search box at the top of the portal, enter **Public IP address**. Select **Public IP addresses** in the search results. You'll create two public IP addresses in this section, IPv4 and IPv6. | Setting | Value | | - | -- |- | IP version | Select **Both**. | - | SKU | Leave the default of **Standard**. | - | **Ipv4 IP Address Configuration** | | + | **Project details** | | + | Subscription | Select your subscription. | + | Resource group | Select **myResourceGroup**. | + | Location | Select **East US 2**. | + | Availability zone | Select **Zone redundant**. | + | **Instance details** | | | Name | Enter **myPublicIP-IPv4**. |+ | IP version | Select **IPv4**. 
| + | SKU | Leave the default of **Standard**. | + | Tier | Leave the default of **Regional**. | + | **IP address assignment** | | | Routing preference | Leave the default of **Microsoft network**. | | Idle timeout (minutes) | Leave the default of **4**. |- | **IPv6 IP Address Configuration** | | - | Name | Enter **myPublicIP-IPv6**. | - | Idle timeout (minutes) | Leave the default of **4**. | + | DNS name label | Enter **myPublicIP-IPv4**. | ++4. Select **Review + create** then **Create**. ++### Create IPv6 public IP address +1. In the search box at the top of the portal, enter **Public IP address**. Select **Public IP addresses** in the search results. ++2. Select **+ Create**. ++3. Enter or select the following information in **Create public IP address**. ++ | Setting | Value | + | - | -- | + | **Project details** | | | Subscription | Select your subscription. | | Resource group | Select **myResourceGroup**. | | Location | Select **East US 2**. | | Availability zone | Select **Zone redundant**. |+ | **Instance details** | | + | Name | Enter **myPublicIP-IPv6**. | + | IP version | Select **IPv6**. | + | SKU | Leave the default of **Standard**. | + | Tier | Leave the default of **Regional**. | + | **IP address assignment** | | + | DNS name label | Enter **myPublicIP-IPv6**. | -4. Select **Create**. +4. Select **Review + create** then **Create**. -### Create virtual machine +## Create virtual machine 1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. You'll create two public IP addresses in this section, IPv4 and IPv6. 7. Select **Create**. -8. **Generate new key pair** will appear. Select **Download private key and create resource**. +8. **Generate new key pair** appears. Select **Download private key and create resource**. -9. The private key will download to your local computer. Copy the private key to a directory on your computer. In the following example, it's **~/.ssh**. +9. 
The private key downloads to your local computer. Copy the private key to a directory on your computer. In the following example, it's **~/.ssh**. 10. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. You'll create two public IP addresses in this section, IPv4 and IPv6. 12. Stop **myVM**. -### Network interface configuration +## Network interface configuration -A network interface is automatically created and attached to the chosen virtual network during creation. In this section, you'll add the IPv6 configuration to the existing network interface. +A network interface is automatically created and attached to the chosen virtual network during creation. In this section, you add the IPv6 configuration to the existing network interface. 1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. A network interface is automatically created and attached to the chosen virtual ## Test SSH connection -You'll connect to the virtual machine with SSH to test the IPv4 public IP address. +You connect to the virtual machine with SSH to test the IPv4 public IP address. 1. In the search box at the top of the portal, enter **Public IP address**. Select **Public IP addresses** in the search results. |
vpn-gateway | Site To Site Vpn Private Peering | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/site-to-site-vpn-private-peering.md | Establishing connectivity is straightforward: ### Traffic from on-premises networks to Azure -For traffic from on-premises networks to Azure, the Azure prefixes are advertised via both the ExpressRoute private peering BGP, and the VPN BGP. The result is two network routes (paths) toward Azure from the on-premises networks: +For traffic from on-premises networks to Azure, the Azure prefixes are advertised via both the ExpressRoute private peering BGP, and the VPN BGP if BGP is configured on your VPN Gateway. The result is two network routes (paths) toward Azure from the on-premises networks: • One network route over the IPsec-protected path. In both of these examples, Azure will send traffic to 10.0.1.0/24 over the VPN c :::image type="content" source="media/site-to-site-vpn-private-peering/connection.png" alt-text="Gateway Private IPs - Enabled"::: 1. Use the private IP that you wrote down in step 3 as the remote IP on your on-premises firewall to establish the Site-to-Site tunnel over the ExpressRoute private peering. + >[!NOTE] + > Configuring BGP on your VPN Gateway is not required to establish a VPN connection over ExpressRoute private peering. + > + ## <a name="powershell"></a>PowerShell steps 1. Configure a Site-to-Site connection. For steps, see the [Configure a Site-to-Site VPN](./tutorial-site-to-site-portal.md) article. Be sure to pick a gateway with a Standard Public IP. |
vpn-gateway | Tutorial Create Gateway Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/tutorial-create-gateway-portal.md | |
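The Resource Mover change quoted in the first row of this table shows an `az resource-mover` operation response ending in a `"status"` field, which must read `Succeeded` before the source regional VMs are deleted. As a minimal sketch of how you might script that check against a saved response (the file path is an example, and the JSON is abridged from the output quoted above; in practice `az ... --query status` or `jq` would do this more robustly):

```shell
# Save an abridged copy of the operation response shown in the docs excerpt.
cat > /tmp/move-op.json <<'EOF'
{
  "endTime": "9/21/2023 11:47:14 AM",
  "name": "34c0d405-672f-431a-8879-582c48940b4a",
  "status": "Succeeded"
}
EOF

# Extract the status value with sed so the check needs no extra tooling,
# and only proceed to cleanup when the move operation actually succeeded.
status=$(sed -n 's/.*"status": "\([^"]*\)".*/\1/p' /tmp/move-op.json)
echo "Move operation status: $status"
if [ "$status" = "Succeeded" ]; then
  echo "Safe to delete source regional VMs."
fi
```

This only gates the deletion step on the reported status; the deletions themselves go through the portal, PowerShell, or Azure CLI links listed in that row.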