Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory | Howto Authentication Methods Activity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-methods-activity.md | The **Usage** report shows which authentication methods are used to sign in. Using the controls at the top of the list, you can search for a user and filter the list of users based on the columns shown. +>[!NOTE] +>User accounts that were recently deleted, also known as [soft-deleted users](../fundamentals/active-directory-users-restore.md), are not listed in user registration details. + The registration details report shows the following information for each user: - User principal name The registration details report shows the following information for each user: - SSPR Registered (Registered, Not Registered) - SSPR Enabled (Enabled, Not Enabled) - SSPR Capable (Capable, Not Capable) -- Methods registered (Alternate Mobile Phone, Email, FIDO2 Security Key, Hardware OATH token, Microsoft Authenticator app, Microsoft Passwordless phone sign-in, Mobile Phone, Office Phone, Security questions, Software OATH token, Temporary Access Pass, Windows Hello for Business)+- Methods registered (Alternate Mobile Phone, Certificate-based authentication, Email, FIDO2 security key, Hardware OATH token, Microsoft Authenticator app, Microsoft Passwordless phone sign-in, Mobile phone, Office phone, Security questions, Software OATH token, Temporary Access Pass, Windows Hello for Business)  |
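The registration data quoted above is also exposed through Microsoft Graph, which is useful for auditing registration coverage outside the portal. A minimal sketch, assuming the `userRegistrationDetails` reporting endpoint and a caller granted a reporting permission such as `AuditLog.Read.All`:

```http
GET https://graph.microsoft.com/v1.0/reports/authenticationMethods/userRegistrationDetails
```

Each returned object mirrors the report columns with properties such as `userPrincipalName`, `isSsprRegistered`, `isSsprEnabled`, `isSsprCapable`, and `methodsRegistered`.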
active-directory | Howto Authentication Passwordless Phone | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-phone.md | To use passwordless phone sign-in with Microsoft Authenticator, the following pr - For iOS, the device must be registered with each tenant where it's used to sign in. For example, the following device must be registered with Contoso and Wingtiptoys to allow all accounts to sign in: - balas@contoso.com - balas@wingtiptoys.com and bsandhu@wingtiptoys-- For iOS, we recommend enabling the option in Microsoft Authenticator to allow Microsoft to gather usage data. It's not enabled by default. To enable it in Microsoft Authenticator, go to **Settings** > **Usage Data**.- - :::image type="content" border="true" source="./media/howto-authentication-passwordless-phone/telemetry.png" alt-text="Screenshot of Usage Data in Microsoft Authenticator."::: To use passwordless authentication in Azure AD, first enable the combined registration experience, then enable users for the passwordless method. |
active-directory | Onboard Enable Tenant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-enable-tenant.md | To enable Permissions Management in your organization: - You must have an Azure AD tenant. If you don't already have one, [create a free account](https://azure.microsoft.com/free/). - You must be eligible for or have an active assignment to the global administrator role as a user in that tenant. -> [!NOTE] -> During public preview, Permissions Management doesn't perform a license check. -> The public preview environment will only be available until October 7th, 2022. You will no longer be able to view or access your configuration and data in the public preview environment after that date. -> Once you complete all the steps and confirm that you want to use Microsoft Entra Permissions Management, access to the public preview environment will be lost. You can take a note of your configuration before you start. -> To start using generally available Microsoft Entra Permissions Management, you must purchase a license or begin a trial. From the public preview console, initiate the workflow by selecting Start. ---- ## How to enable Permissions Management on your Azure AD tenant 1. In your browser: |
active-directory | Product Privileged Role Insights | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-privileged-role-insights.md | -# View privileged role assignments in your organization (Preview) +# View privileged role assignments in your organization The **Azure AD Insights** tab shows you who is assigned to privileged roles in your organization. You can review a list of identities assigned to a privileged role and learn more about each identity. |
active-directory | Developer Support Help Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/developer-support-help-options.md | Explore the range of [Azure support options and choose the plan](https://azure.m - If you already have an Azure Support Plan, [open a support request here](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). +- If you're using Azure AD for customers (preview), the support request feature is currently unavailable in customer tenants. However, you can use the **Give Feedback** link on the **New support request** page to provide feedback. Or, you can switch to your Azure AD workforce tenant and [open a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). + - If you're not an Azure customer, you can open a support request with [Microsoft Support for business](https://support.serviceshub.microsoft.com/supportforbusiness). ## Post a question to Microsoft Q&A |
active-directory | Optional Claims | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/optional-claims.md | Configure claims using the manifest: 1. When finished, select **Save**. Now the specified optional claims are included in the tokens for your application. -The `oprionalClaims` object declares the optional claims requested by an application. An application can configure optional claims that are returned in ID tokens, access tokens, and SAML 2 tokens. The application can configure a different set of optional claims to be returned in each token type. +The `optionalClaims` object declares the optional claims requested by an application. An application can configure optional claims that are returned in ID tokens, access tokens, and SAML 2 tokens. The application can configure a different set of optional claims to be returned in each token type. | Name | Type | Description | |||-| |
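To make the `optionalClaims` structure concrete, here's a minimal sketch of how the object might look in an application manifest; the specific claims (`auth_time`, `ipaddr`) are illustrative choices, not requirements:

```json
"optionalClaims": {
    "idToken": [
        { "name": "auth_time", "source": null, "essential": false, "additionalProperties": [] }
    ],
    "accessToken": [
        { "name": "ipaddr", "source": null, "essential": false, "additionalProperties": [] }
    ],
    "saml2Token": []
}
```

Because each token type (`idToken`, `accessToken`, `saml2Token`) takes its own array, an application can return a different set of optional claims in each token type, as described above.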
active-directory | Troubleshoot Device Dsregcmd | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/troubleshoot-device-dsregcmd.md | Active Directory Federation Services (AD FS). For hybrid Azure AD-joined devices This field is skipped if no diagnostics information is available. The diagnostics information fields are the same as **AcquirePrtDiagnostics** +>[!NOTE] +> The following Cloud Kerberos diagnostics fields were added in the original release of Windows 11 (version 21H2). ++- **OnPremTgt**: Set the state to *YES* if a Cloud Kerberos ticket to access on-premises resources is present on the device for the logged-in user. +- **CloudTgt**: Set the state to *YES* if a Cloud Kerberos ticket to access cloud resources is present on the device for the logged-in user. +- **KerbTopLevelNames**: List of top-level Kerberos realm names for Cloud Kerberos. + ### Sample SSO state output ``` The diagnostics information fields are the same as **AcquirePrtDiagnostics** EnterprisePrtUpdateTime : 2019-01-24 19:15:33.000 UTC EnterprisePrtExpiryTime : 2019-02-07 19:15:33.000 UTC EnterprisePrtAuthority : https://fs.hybridadfs.nttest.microsoft.com:443/adfs+ OnPremTgt : YES + CloudTgt : YES + KerbTopLevelNames : .windows.net,.windows.net:1433,.windows.net:3342,.azure.net,.azure.net:1433,.azure.net:3342 ``` This diagnostics section performs the prerequisites check for setting up Windows - **LogonCertTemplateReady**: This setting is specific to WHFB Certificate Trust deployment and present only if the CertEnrollment state is *enrollment authority*. Set the state to *YES* if the state of the login certificate template is valid and helps troubleshoot the AD FS Registration Authority (RA). - **PreReqResult**: Provides the result of all WHFB prerequisites evaluation. Set the state to *Will Provision* if WHFB enrollment would be launched as a post-login task when the user signs in next time. +>[!NOTE] +> The following Cloud Kerberos diagnostics fields were added in the Windows 10 May 2021 update (version 21H1). ++>[!NOTE] +> Prior to Windows 11 version 23H2, the setting **OnPremTGT** was named **CloudTGT**. ++- **OnPremTGT**: This setting is specific to Cloud Kerberos trust deployment and present only if the CertEnrollment state is *none*. Set the state to *YES* if the device has a Cloud Kerberos ticket to access on-premises resources. Prior to Windows 11 version 23H2, this setting was named **CloudTGT**. + ### Sample NGC prerequisites check output ``` |
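To surface just the Cloud Kerberos fields described above, run the diagnostic in the signed-in user's context (a non-elevated command prompt) and filter the output; a quick sketch using the built-in `findstr`:

```console
dsregcmd /status | findstr /i "OnPremTgt CloudTgt KerbTopLevelNames"
```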
active-directory | Groups Self Service Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-self-service-management.md | Here are some additional details about these group settings. - If you want to enable some, but not all, of your users to create groups, you can assign those users a role that can create groups, such as [Groups Administrator](../roles/permissions-reference.md#groups-administrator). - These settings are for users and don't impact service principals. For example, if you have a service principal with permissions to create groups, even if you set these settings to **No**, the service principal will still be able to create groups. +## Configure group settings using Microsoft Graph ++To configure the _Users can create security groups in Azure portals, API or PowerShell_ setting using Microsoft Graph, configure the **EnableGroupCreation** object in the groupSettings object. For more information, see [Overview of group settings](/graph/group-directory-settings). ++Alternatively, you can update the **allowedToCreateSecurityGroups** property of **defaultUserRolePermissions** in the [authorizationPolicy](/graph/api/resources/authorizationpolicy) object. + ## Next steps These articles provide additional information on Azure Active Directory. |
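As a sketch of the second approach above, the following Microsoft Graph request updates the authorization policy; it assumes the caller holds a permission such as `Policy.ReadWrite.Authorization`. Setting the property to `false` corresponds to choosing **No** for the security group setting:

```http
PATCH https://graph.microsoft.com/v1.0/policies/authorizationPolicy
Content-Type: application/json

{
    "defaultUserRolePermissions": {
        "allowedToCreateSecurityGroups": false
    }
}
```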
active-directory | Licensing Service Plan Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-service-plan-reference.md | When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic - **Service plans included (friendly names)**: A list of service plans (friendly names) in the product that correspond to the string ID and GUID >[!NOTE]->This information last updated on May 11th, 2023.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv). +>This information last updated on June 1st, 2023.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv). ><br/> | Product name | String ID | GUID | Service plans included | Service plans included (friendly names) | When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic | Visio Plan 2 for GCC | VISIOCLIENT_GOV | 4ae99959-6b0f-43b0-b1ce-68146001bdba | EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>ONEDRIVE_BASIC_GOV (98709c2e-96b5-4244-95f5-a0ebe139fb8a)<br/>VISIO_CLIENT_SUBSCRIPTION_GOV (f85945f4-7a55-4009-bc39-6a5f14a8eac1)<br/>VISIOONLINE_GOV (8a9ecb07-cfc0-48ab-866c-f83c4d911576) | EXCHANGE FOUNDATION FOR GOVERNMENT (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>ONEDRIVE FOR BUSINESS BASIC FOR GOVERNMENT (98709c2e-96b5-4244-95f5-a0ebe139fb8a)<br/>VISIO DESKTOP APP FOR Government (f85945f4-7a55-4009-bc39-6a5f14a8eac1)<br/>VISIO WEB APP FOR GOVERNMENT (8a9ecb07-cfc0-48ab-866c-f83c4d911576) | | Viva Topics | TOPIC_EXPERIENCES | 4016f256-b063-4864-816e-d818aad600c9 | GRAPH_CONNECTORS_SEARCH_INDEX_TOPICEXP (b74d57b2-58e9-484a-9731-aeccbba954f0)<br/>CORTEX (c815c93d-0759-4bb8-b857-bc921a71be83) | Graph Connectors Search with Index (Viva Topics) (b74d57b2-58e9-484a-9731-aeccbba954f0)<br/>Viva Topics (c815c93d-0759-4bb8-b857-bc921a71be83) | | Windows 10/11 Enterprise E5 (Original) | WIN_ENT_E5 | 1e7e1070-8ccb-4aca-b470-d7cb538cb07e | DATAVERSE_FOR_POWERAUTOMATE_DESKTOP (59231cdf-b40d-4534-a93e-14d0cd31d27e)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>POWERAUTOMATE_DESKTOP_FOR_WIN (2d589a15-b171-4e61-9b5f-31d15eeb2872)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>WIN10_PRO_ENT_SUB (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365) | Dataverse for PAD (59231cdf-b40d-4534-a93e-14d0cd31d27e)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Defender for Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>PAD for Windows (2d589a15-b171-4e61-9b5f-31d15eeb2872)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10/11 Enterprise (Original) (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/> Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365) |-| Windows 10 Enterprise A3 for faculty | WIN10_ENT_A3_FAC | 8efbe2f6-106e-442f-97d4-a59aa6037e06 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) 
(e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10 Enterprise (New) (e7c91390-7625-45be-94e0-e16907e03118)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365) | -| Windows 10 Enterprise A3 for students | WIN10_ENT_A3_STU | d4ef921e-840b-4b48-9a90-ab6698bc7b31 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10 Enterprise (New) (e7c91390-7625-45be-94e0-e16907e03118)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365) | -| WINDOWS 10 ENTERPRISE E3 | WIN10_PRO_ENT_SUB | cb10e6cd-9da4-4992-867b-67546b1db821 | WIN10_PRO_ENT_SUB (21b439ba-a0ca-424f-a6cc-52f954a5b111) | WINDOWS 10 ENTERPRISE (21b439ba-a0ca-424f-a6cc-52f954a5b111) | -| WINDOWS 10 ENTERPRISE E3 | WIN10_VDA_E3 | 6a0f6da5-0b87-4190-a6ae-9bb5a2b9546a | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Windows_Autopatch (9a6eeb79-0b4b-4bf0-9808-39d99a2cd5a3) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>UNIVERSAL PRINT (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>WINDOWS 10 ENTERPRISE (NEW) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWS UPDATE FOR BUSINESS DEPLOYMENT SERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Windows Autopatch (9a6eeb79-0b4b-4bf0-9808-39d99a2cd5a3) | -| Windows 10 Enterprise E5 | WIN10_VDA_E5 | 488ba24a-39a9-4473-8ee5-19291e71b002 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Windows_Autopatch (9a6eeb79-0b4b-4bf0-9808-39d99a2cd5a3) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Defender For Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10 Enterprise (New) (e7c91390-7625-45be-94e0-e16907e03118)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Windows Autopatch (9a6eeb79-0b4b-4bf0-9808-39d99a2cd5a3) | -| Windows 10 Enterprise E5 Commercial (GCC Compatible) | WINE5_GCC_COMPAT | 938fd547-d794-42a4-996c-1cc206619580 | EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118) | Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>Microsoft Defender For Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Windows 10 Enterprise (New) 
(e7c91390-7625-45be-94e0-e16907e03118) | +| Windows 10/11 Enterprise A3 for faculty | WIN10_ENT_A3_FAC | 8efbe2f6-106e-442f-97d4-a59aa6037e06 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10 Enterprise (New) (e7c91390-7625-45be-94e0-e16907e03118)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365) | +| Windows 10/11 Enterprise A3 for students | WIN10_ENT_A3_STU | d4ef921e-840b-4b48-9a90-ab6698bc7b31 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10 Enterprise (New) (e7c91390-7625-45be-94e0-e16907e03118)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365) | +| WINDOWS 10/11 ENTERPRISE E3 | WIN10_PRO_ENT_SUB | cb10e6cd-9da4-4992-867b-67546b1db821 | WIN10_PRO_ENT_SUB (21b439ba-a0ca-424f-a6cc-52f954a5b111) | WINDOWS 10 ENTERPRISE (21b439ba-a0ca-424f-a6cc-52f954a5b111) | +| WINDOWS 10/11 ENTERPRISE E3 | WIN10_VDA_E3 | 6a0f6da5-0b87-4190-a6ae-9bb5a2b9546a | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Windows_Autopatch (9a6eeb79-0b4b-4bf0-9808-39d99a2cd5a3) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>UNIVERSAL PRINT (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>WINDOWS 10 ENTERPRISE (NEW) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWS UPDATE FOR BUSINESS DEPLOYMENT SERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Windows Autopatch (9a6eeb79-0b4b-4bf0-9808-39d99a2cd5a3) | +| Windows 10/11 Enterprise E5 | WIN10_VDA_E5 | 488ba24a-39a9-4473-8ee5-19291e71b002 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Windows_Autopatch (9a6eeb79-0b4b-4bf0-9808-39d99a2cd5a3) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Defender For Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10 Enterprise (New) (e7c91390-7625-45be-94e0-e16907e03118)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Windows Autopatch (9a6eeb79-0b4b-4bf0-9808-39d99a2cd5a3) | +| Windows 10/11 Enterprise E5 Commercial (GCC Compatible) | WINE5_GCC_COMPAT | 938fd547-d794-42a4-996c-1cc206619580 | EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>WINDEFATP 
(871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118) | Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>Microsoft Defender For Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Windows 10 Enterprise (New) (e7c91390-7625-45be-94e0-e16907e03118) | | Windows 10/11 Enterprise VDA | E3_VDA_only | d13ef257-988a-46f3-8fce-f47484dd4550 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>Windows_Autopatch (9a6eeb79-0b4b-4bf0-9808-39d99a2cd5a3)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>DATAVERSE_FOR_POWERAUTOMATE_DESKTOP (59231cdf-b40d-4534-a93e-14d0cd31d27e)<br/>POWERAUTOMATE_DESKTOP_FOR_WIN (2d589a15-b171-4e61-9b5f-31d15eeb2872) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10/11 Enterprise (e7c91390-7625-45be-94e0-e16907e03118)<br/>Windows Autopatch (9a6eeb79-0b4b-4bf0-9808-39d99a2cd5a3)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Dataverse for PAD (59231cdf-b40d-4534-a93e-14d0cd31d27e)<br/>PAD for Windows (2d589a15-b171-4e61-9b5f-31d15eeb2872) | | Windows 365 Business 1 vCPU 2 GB 64 GB | CPC_B_1C_2RAM_64GB | 816eacd3-e1e3-46b3-83c8-1ffd37e053d9 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>M365_LIGHTHOUSE_CUSTOMER_PLAN1 (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>M365_LIGHTHOUSE_PARTNER_PLAN1 (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>CPC_B_1C_2RAM_64GB (3b98b912-1720-4a1e-9630-c9a41dbb61d8) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft 365 Lighthouse (Plan 1) (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>Microsoft 365 Lighthouse (Plan 2) (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>Windows 365 Business 1 vCPU, 2 GB, 64 GB (3b98b912-1720-4a1e-9630-c9a41dbb61d8) | | Windows 365 Business 2 vCPU 4 GB 128 GB | CPC_B_2C_4RAM_128GB | 135bee78-485b-4181-ad6e-40286e311850 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>M365_LIGHTHOUSE_CUSTOMER_PLAN1 (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>M365_LIGHTHOUSE_PARTNER_PLAN1 (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>CPC_B_2C_4RAM_128GB (1a13832e-cd79-497d-be76-24186f55c8b0) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft 365 Lighthouse (Plan 1) (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>Microsoft 365 Lighthouse (Plan 2) (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>Windows 365 Business 2 vCPU, 4 GB, 128 GB (1a13832e-cd79-497d-be76-24186f55c8b0) | |
active-directory | Concept Authentication Methods Customers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/concept-authentication-methods-customers.md | The following screenshots show the sign-in with Facebook experience. In the sign Learn how to [add Facebook as an identity provider](how-to-facebook-federation-customers.md). +### Updating sign-in methods ++At any time, you can update the sign-in options you've selected for an app. For example, you can add social identity providers or change the local account sign-in method. ++Be aware that when you change sign-in methods, the change affects only new users. Existing users will continue to sign in using their original method. For example, suppose you start out with the email and password sign-in method, and then change to email with one-time passcode. New users will sign in using a one-time passcode, but any users who have already signed up with an email and password will continue to be prompted for their email and password. + ## Next steps To learn how to add identity providers for sign-in to your applications, refer to the following articles: |
active-directory | Concept Branding Customers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/concept-branding-customers.md | -# Customize the neutral default authentication experience for the customer tenant +# Customize the neutral default authentication experience for the customer tenant (preview) After creating a new customer tenant, you can customize the appearance of your web-based applications for customers who sign in or sign up, to personalize their end-user experience. In Azure AD, the default Microsoft branding will appear in your sign-in pages before you customize any settings. This branding represents the global look and feel that applies across all sign-ins to your tenant. The following image displays the neutral default branding of the customer tenant For more information, see [Customize the neutral branding in your customer tenant](how-to-customize-branding-customers.md). + ## Text customization You might have different requirements for the information you want to collect during sign-up and sign-in. The customer tenant comes with a built-in set of information stored in attributes, such as Given Name, Surname, City, and Postal Code. In the customer tenant, we have two options to add custom text to the sign-up and sign-in experience. The function is available under each user flow during language customization and also under **Company Branding**. Although we have two ways to customize strings, both ways modify the same JSON file. The most recent change made either via **User flows** or via **Company Branding** will always override the previous one. |
active-directory | Concept Planning Your Solution | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/concept-planning-your-solution.md | Title: Plan CIAM deployment description: Learn how to plan your CIAM deployment. -+ Previously updated : 05/24/2023- Last updated : 05/31/2023+ -# Planning for customer identity and access management +# Planning for customer identity and access management (preview) -Azure Active Directory (Azure AD) for customers is a customizable, extensible solution for adding customer identity and access management (CIAM) to your app. Because it's built on the Azure AD platform, you benefit from consistency in app integration, tenant management, and operations across your workforce and customer scenarios. When designing your configuration, it's important to understand the components of a customer tenant and the Azure AD features that are available for your customer scenarios. +Microsoft Entra External ID for customers, also known as Azure Active Directory (Azure AD) for customers, is a customizable, extensible solution for adding customer identity and access management (CIAM) to your app. Because it's built on the Azure AD platform, you benefit from consistency in app integration, tenant management, and operations across your workforce and customer scenarios. When designing your configuration, it's important to understand the components of a customer tenant and the Azure AD features that are available for your customer scenarios. + This article provides a general framework for integrating your app and configuring Azure AD for customers. It describes the capabilities available in a customer tenant and outlines the important planning considerations for each step in your integration. When sign-up is complete, Azure AD generates a token and redirects the customer When planning your sign-up and sign-in experience, determine your requirements: -- **Number of user flows**. Each application can have just one sign-up and sign-in user flow. If you have several applications, you can use a single user flow for all of them. Or, if you want a different experience for each application, you can create multiple user flows.+- **Number of user flows**. Each application can have just one sign-up and sign-in user flow. If you have several applications, you can use a single user flow for all of them. Or, if you want a different experience for each application, you can create multiple user flows. The maximum is 10 user flows per customer tenant. -- **Company branding and language customizations**. Although we describe configuring company branding and language customizations later in Step 4, you can configure them anytime, either before or after you integrate an app with a user flow. If you configure company branding before you create the user flow, the sign in pages reflect that branding. Otherwise, the sign in pages reflect the default, neutral branding.+- **Company branding and language customizations**. Although we describe configuring company branding and language customizations later in Step 4, you can configure them anytime, either before or after you integrate an app with a user flow. If you configure company branding before you create the user flow, the sign-in pages reflect that branding. Otherwise, the sign-in pages reflect the default, neutral branding. - **Attributes to collect**. In the user flow settings, you can select from a set of built-in user attributes you want to collect from customers. 
The customer enters the information on the sign-up page, and it's stored with their profile in your directory. If you want to collect more information, you can [define custom attributes](how-to-define-custom-attributes.md) and add them to your user flow. |
active-directory | Concept Supported Features Customers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/concept-supported-features-customers.md | -# Supported features in Azure Active Directory for customers +# Supported features in Azure Active Directory for customers (preview) Azure Active Directory (Azure AD) for customers is designed for businesses that want to make applications available to their customers, using the Microsoft Entra platform for identity and access. With the introduction of this feature, Microsoft Entra now offers two different types of tenants that you can create and manage: Azure Active Directory (Azure AD) for customers is designed for businesses that - A **customer tenant** represents your customer-facing app, resources, and directory of customer accounts. A customer tenant is distinct and separate from your workforce tenant. + ## Compare workforce and customer tenant capabilities Although workforce tenants and customer tenants are built on the same underlying Microsoft Entra platform, there are some feature differences. The following table compares the features available in each type of tenant. |
active-directory | How To Create Customer Tenant Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-create-customer-tenant-portal.md | -# Create a customer identity and access management (CIAM) tenant +# Create a customer identity and access management (CIAM) tenant (preview) Azure Active Directory (Azure AD) offers a customer identity access management (CIAM) solution that lets you create secure, customized sign-in experiences for your customer-facing apps and services. With these built-in CIAM features, Azure AD can serve as the identity provider and access management service for your customer scenarios. You'll need to create a customer tenant in the Microsoft Entra admin center to get started. Once the customer tenant is created, you can access it in both the Microsoft Entra admin center and the Azure portal. In this article, you learn how to: - An Azure subscription. If you don't have one, create a <a href="https://azure.microsoft.com/free/?WT.mc_id=A261C142F" target="_blank">free account</a> before you begin. - An Azure account that's been assigned at least the [Contributor](/azure/role-based-access-control/built-in-roles#contributor) role scoped to the subscription or to a resource group within the subscription. + ## Create a new customer tenant 1. Sign in to your organization's [Microsoft Entra admin center](https://entra.microsoft.com/). |
active-directory | How To Customize Branding Customers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-customize-branding-customers.md | -# Customize the neutral branding in your customer tenant +# Customize the neutral branding in your customer tenant (preview) After creating a new customer tenant, you can customize the end-user experience. Create a custom look and feel for users signing in to your web-based apps by configuring **Company branding** settings for your tenant. With these settings, you can add your own background images, colors, company logos, and text to customize the sign-in experiences across your apps. You can also create user flows programmatically using the Company Branding Graph API. You can also create user flows programmatically using the Company Branding Graph - [Create a user flow](how-to-user-flow-sign-up-sign-in-customers.md) - Review the file size requirements for each image you want to add. You may need to use a photo editor to create the right-sized images. The preferred image type for all images is PNG, but JPG is accepted. + ## Comparing the default sign-in experiences between the customer tenant and the Azure AD tenant The default sign-in experience is the global look and feel that applies across all sign-ins to your tenant. The default branding experiences between the customer tenant and the default Azure AD tenant are distinct. -Your Azure AD tenant supports Microsoft look and feel as a default state for authentication experience. You can [customize the default Microsoft sign-in experience](/azure/active-directory/fundamentals/how-to-customize-branding) with a custom background image or color, favicon, layout, header, and footer. You can also upload a custom CSS. If the custom company branding fails to load for any reason, the sign-in page will revert to the default Microsoft branding. +Your Azure AD tenant supports Microsoft look and feel as a default state for authentication experience. You can [customize the default Microsoft sign-in experience](/azure/active-directory/fundamentals/how-to-customize-branding) with a custom background image or color, favicon, layout, header, and footer. You can also upload a [custom CSS](/azure/active-directory/fundamentals/reference-company-branding-css-template). If the custom company branding fails to load for any reason, the sign-in page will revert to the default Microsoft branding. Microsoft provides a neutral branding as the default for the customer tenant, which can be customized to meet the specific needs of your company. The default branding for the customer tenant is neutral and doesn't include any existing Microsoft branding. If the custom company branding fails to load for any reason, the sign-in page will revert to this neutral branding. It's also possible to add each custom branding property to the custom sign-in page individually. The following image displays the neutral default branding of the customer tenant ## How to customize the default sign-in experience -Before you customize any settings, the neutral default branding will appear in your sign-in and sign-up pages. You can customize this default experience with a custom background image or color, favicon, layout, header, and footer. You can also upload a custom CSS. +Before you customize any settings, the neutral default branding will appear in your sign-in and sign-up pages. 
You can customize this default experience with a custom background image or color, favicon, layout, header, and footer. You can also upload a [custom CSS](/azure/active-directory/fundamentals/reference-company-branding-css-template). 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/). 1. If you have access to multiple tenants, use the **Directories + subscriptions** filter in the top menu to switch to the customer tenant you created earlier. |
active-directory | How To Manage Customer Accounts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-manage-customer-accounts.md | -# Add and manage customer accounts +# Add and manage customer accounts (preview) There might be scenarios in which you want to manually create customer accounts in your Azure Active Directory customer tenant. Although customer accounts are most commonly created when users sign up to use one of your applications, you can create them programmatically and by using the Microsoft Entra admin center. This article focuses on the Microsoft Entra admin center method of user creation and deletion. To add or delete users, your account must be assigned the *User administrator* o - Understand user accounts in Azure AD for customers. - Understand user roles to control resource access. + ## Create a customer account 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/) with Global Administrator or Privileged Role Administrator permissions. |
active-directory | Microsoft Graph Operations Custom Extensions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/microsoft-graph-operations-custom-extensions.md | |
active-directory | Microsoft Graph Operations User Flow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/microsoft-graph-operations-user-flow.md | |
active-directory | Overview Customers Ciam | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/overview-customers-ciam.md | Microsoft Entra External ID for customers, also known as Azure Active Directory :::image type="content" source="media/overview-customers-ciam/overview-ciam.png" alt-text="Diagram showing an overview of customer identity and access management." border="false"::: -> [!IMPORTANT] -> Azure AD for customers is currently in preview. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. ## Add customized sign-in to your customer-facing apps |
active-directory | Quickstart Trial Setup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/quickstart-trial-setup.md | -# Quickstart: Get started with Azure AD for customers (Preview) +# Quickstart: Get started with Azure AD for customers (preview) Get started with Azure AD for customers (Preview), which lets you create secure, customized sign-in experiences for your customer-facing apps and services. With these built-in customer tenant features, Azure AD for customers can serve as the identity provider and access management service for your customers. During the free trial period, you'll have access to all product features with fe | Group and User management. | :heavy_check_mark: | :heavy_check_mark: | | **Cloud-agnostic solution** with multi-language auth SDK support. | :heavy_check_mark: | :heavy_check_mark: | -> [!IMPORTANT] -> Azure AD for customers is currently in preview. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. ## Sign up to your customer tenant free trial Follow the steps below to download and run the sample app. Follow the articles below to learn more about the configuration the guide created for you or to configure your own apps. You can always come back to the [admin center](https://entra.microsoft.com/) to customize your tenant and explore the full range of configuration options for your tenant. +> [!NOTE] +> The next time you return to your tenant, you might be prompted to set up additional authentication factors for added security of your tenant admin account. + ## Next steps - [Register an app in CIAM](how-to-register-ciam-app.md) - [Customize user experience for your customers](how-to-customize-branding-customers.md) |
active-directory | Troubleshooting Known Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/troubleshooting-known-issues.md | + + Title: Known issues in customer tenants +description: Learn about known issues in customer tenants. +++++++ Last updated : 05/31/2023++++++# Known issues with Azure Active Directory (Azure AD) for customers ++This article describes known issues that you may experience when you use Azure Active Directory (Azure AD) for customers, and provides help to resolve these issues. ++## Tenant creation and management ++### Tenant creation fails when you choose an unsupported region ++During customer tenant creation, the **Country/Region** dropdown menu lists countries and regions where Azure AD for customers isn't yet available. If you choose Japan or Australia, tenant creation fails. ++**Cause**: Public preview is currently available in the Americas and Europe, with more regions to follow shortly. ++**Workaround**: Select a different region and try again. ++### Customer trial tenants can't be extended or linked with an existing Azure subscription ++Customer trial tenants can't be supported beyond 30 days. ++**Workaround**: Take one of the following actions. ++- To continue beyond 30 days, if you're an existing Azure AD customer, [create a new customer tenant](how-to-create-customer-tenant-portal.md) with your subscription. ++- If you don't have an Azure AD account, delete the trial tenant and [set up an Azure free account](https://azure.microsoft.com/free/). ++### The get started guide UI lacks client-side validation for the Domain name field ++When you manually update the autopopulated value for the **Domain name** field, it may appear as though the value is accepted, but then an error occurs. ++**Cause**: Currently there's no client-side validation in the get started guide for setting up a trial tenant. ++**Workaround**: Enter a value that meets the domain name requirements. The **Domain name** field accepts an alphanumeric value with a length of up to 27 characters. ++### Using your admin email to create a local customer account prevents you from administering the tenant ++If you're the admin who created the customer tenant, and you use the same email address as your admin account to create a local customer account in that same tenant, you can't sign in directly to the tenant with admin privileges. ++**Cause**: Using your tenant admin email to create a customer account via self-service sign-up creates a second user with the same email address, but with customer-level privileges. When you sign in to the tenant via `https://entra.microsoft.com/<tenantID>` or `<tenantName>.onmicrosoft.com`, the least-privileged account takes precedence, and you're signed in as the customer instead of the admin. You have insufficient privileges to manage the tenant. ++**Workaround**: Take one of the following actions. ++- When creating a local customer account, use a different email address than the one used by the admin who created the tenant. +- If you've already created a customer account with the same email address as the admin, sign out of the admin center, and then use `https://entra.microsoft.com` instead of `https://entra.microsoft.com/<tenantID>` or `<tenantName>.onmicrosoft.com` to sign in with the correct admin account. 
++### Unable to delete your customer tenant ++You get the following error when you try to delete a customer tenant: ++ `Unable to delete tenant` ++**Cause**: This error occurs when you try to delete a customer tenant but you haven't deleted the b2c-extensions-app. ++**Workaround**: When deleting a customer tenant, delete the **b2c-extensions-app**, found in **App registrations** under **All applications**. ++## Branding ++### Device code flows display Microsoft branding instead of custom branding ++The device code flows display Microsoft branding even when you've configured custom branding. ++**Cause**: Device code flows don't yet support custom branding. ++**Workaround**: None currently. ++### The sign-up page displays Microsoft branding and "Can't access your account?" ++After you set up a tenant and create a sign-up user flow, you see Microsoft branding instead of neutral branding, along with **Can't access your account?** under the sign-in email box instead of **No account? Create one**. ++**Cause**: The sign-in page for a workforce tenant is displaying instead of sign-in for a customer tenant. This issue can occur when you refresh the sign-in page too many times in quick succession. ++**Workaround**: Wait a few minutes and then refresh. The customer sign-in page should appear. ++## Samples ++### Error when signing in to a sample ++When you follow the get started guide to run a sample and try to sign in as a customer, you see an error message that starts with the following text: ++ `AADSTS50011: The redirect URI specified in the request does not match the redirect URIs configured for the application...` ++**Cause**: This error can occur when there is a replication delay in updating the redirect URI in the app registration. ++**Workaround**: Take one of the following actions. ++- Try the sample sign-in again after a few minutes. +- Check the app registration to confirm that the redirect URI in the error is configured. ++### Error "Invalid client secret provided" (ASP.NET Core) or "Cannot read properties of undefined (reading 'verifier')" (Node.js) ++When you run the ASP.NET Core sample from the get started guide and try to sign in as a customer, you see an error that starts with the following text: ++ `AADSTS7000215: Invalid client secret provided. Ensure the secret being sent in the request is the client secret value, not the client secret ID, for a secret added to app...` ++Or, when you run the Node.js sample, you see an error containing the following line: ++ `TypeError: Cannot read properties of undefined (reading 'verifier')` ++**Cause**: These errors can be caused by a replication delay in updating the secret in the app registration. ++**Workaround**: Take one of the following actions. ++- Try the sample sign-in again after a few minutes. +- Check the app registration to confirm there's a client secret configured and it matches the value in the application configuration. ++## Token version in Web API ++### Error when running a web API ++When you create your own web API in a customer tenant (without using the app creation scripts in the web API samples), and then run it, send an access token, and enable logging, you see the following error: ++ `IDX20804: Unable to retrieve document from: https://<tenant>.ciamlogin.com/common/discovery/keys` ++**Cause**: This error occurs if you haven't set the accepted access token version to 2. ++**Workaround**: Do the following. ++1. Go to the app registration for your application. +1. Choose to edit the manifest. +1. 
Change the **accessTokenAcceptedVersion** property from null to **2**. ++## Next steps ++See also [Supported features in Azure Active Directory for customers](concept-supported-features-customers.md) |
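For the manifest edit in the last workaround above, the relevant fragment of the app registration manifest would look like this after the change (a sketch; all other manifest properties are omitted):

```json
{
    "accessTokenAcceptedVersion": 2
}
```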
active-directory | Whats New Docs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/whats-new-docs.md | Title: "What's new in Azure Active Directory External Identities" description: "New and updated documentation for the Azure Active Directory External Identities." Previously updated : 04/28/2023 Last updated : 06/01/2023 +## May 2023 ++### New article ++- [Set up tenant restrictions V2 (Preview)](tenant-restrictions-v2.md) ++### Updated articles ++- [Overview: Cross-tenant access with Azure AD External Identities](cross-tenant-access-overview.md) Graph API links were updated. +- [Reset redemption status for a guest user](reset-redemption-status.md) Screenshots were updated. + ## April 2023 ### Updated articles Welcome to what's new in Azure Active Directory External Identities documentatio - [Billing model for Azure AD External Identities](external-identities-pricing.md) - [Tutorial: Enforce multi-factor authentication for B2B guest users](b2b-tutorial-require-mfa.md) -## February 2023 --### Updated articles --- [Email one-time passcode authentication](one-time-passcode.md)-- [Secure your API used an API connector in Azure AD External Identities self-service sign-up user flows](self-service-sign-up-secure-api-connector.md)-- [Azure Active Directory External Identities: What's new](whats-new-docs.md)-- [Authentication and Conditional Access for External Identities](authentication-conditional-access.md)-- [Quickstart: Add a guest user with PowerShell](b2b-quickstart-invite-powershell.md) |
active-directory | Multilateral Federation Baseline | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/multilateral-federation-baseline.md | Title: University multilateral federation baseline design -description: Baseline design for a multilateral federation solution for universities. +description: Learn about a baseline design for a multilateral federation solution for universities. -Microsoft frequently speaks with research universities that operate in hybrid environments in which applications are either cloud-based or hosted on-premises. In both cases, applications can use different authentication protocols. In some cases, these protocols are reaching end-of-life or are not providing the required level of security. +Microsoft often speaks with research universities that operate in hybrid environments in which applications are either cloud based or hosted on-premises. In both cases, applications can use various authentication protocols. In some cases, these protocols are reaching end of life or aren't providing the required level of security. -[](media/multilateral-federation-baseline/typical-baseline-environment.png#lightbox) +[](media/multilateral-federation-baseline/typical-baseline-environment.png#lightbox) -Applications drive much of the need for different authentication protocols and different identity management mechanisms (IdM). +Applications drive much of the need for different authentication protocols and different identity management (IdM) mechanisms. -In research university environments, research apps often drive IdM requirements. A federation provider, such as Shibboleth, might be used as a primary identity provider (IdP). If this is the case, Azure AD is often configured to federate with Shibboleth. If Microsoft 365 apps are also in use, Azure AD enables you to configure integration. +In research university environments, research apps often drive IdM requirements. A university might use a federation provider, such as Shibboleth, as a primary identity provider (IdP). If so, Azure Active Directory (Azure AD) is often configured to federate with Shibboleth. If Microsoft 365 apps are also in use, Azure AD enables you to configure integration. -Applications used in research universities operate in various portions of the overall IT footprint: +Applications used in research universities operate in various parts of the overall IT footprint: -* Research and multilateral federation applications are made available through InCommon and EduGAIN. +* Research and multilateral federation applications are available through InCommon and eduGAIN. * Library applications provide access to electronic journals and other e-content providers. -* Some applications use legacy authentication protocols such as Central Authentication Service (CAS) to enable single sign-on. +* Some applications use legacy authentication protocols such as Central Authentication Service to enable single sign-on. -* Student and faculty applications often use multiple authentication mechanisms. For example, some are integrated with Shibboleth or other federation providers, while others are integrated with Azure AD. +* Student and faculty applications often use multiple authentication mechanisms. For example, some are integrated with Shibboleth or other federation providers, whereas others are integrated with Azure AD. * Microsoft 365 applications are integrated with Azure AD. -* Windows Server Active Directory (AD) might be in use and synchronized to Azure AD. 
+* Windows Server Active Directory might be in use and synchronized with Azure AD. -* Lightweight Directory Access Protocol (LDAP) is in use at many universities that might have an external LDAP directory or Identity Registry. These registries are often used to house confidential attributes, role hierarchy information, and even certain types of users, such as applicants. +* Lightweight Directory Access Protocol (LDAP) is in use at many universities that might have an external LDAP directory or identity registry. These registries are often used to house confidential attributes, role hierarchy information, and even certain types of users, such as applicants. -* On-premises AD, or an external LDAP directory, is often used to enable single-credential sign-in for non-web applications and various non-Microsoft operating system sign-ins. +* On-premises Active Directory, or an external LDAP directory, is often used to enable single-credential sign-in for non-web applications and various non-Microsoft operating system sign-ins. ## Baseline architecture challenges -Often, baseline architectures evolve over time, introducing complexity and rigidness to the design and ability to update. Some of the challenges with using the baseline architecture include: +Baseline architectures often evolve over time, introducing complexity and rigidness to the design and the ability to update. Some of the challenges with using the baseline architecture include: -* **Hard to react to new requirements** - Having a complex environment makes it hard to quickly adapt and keep up with the most recent regulations and requirements. For example, if you have apps in lots of different locations and these apps are all connected in different ways with different IdMs, you run into the problem of where to locate multi-factor authentication (MFA) services and how to enforce MFA. Higher education also experiences fragmented service ownership. The people responsible for key services such as enterprise resource planning (ERP), learning management system (LMS), division, and department solutions might resist efforts to change or modify the systems they operate. +* **Hard to react to new requirements**: Having a complex environment makes it hard to quickly adapt and keep up with the most recent regulations and requirements. For example, if you have apps in lots of locations, and these apps are connected in different ways with different IdMs, you have to decide where to locate multifactor authentication (MFA) services and how to enforce MFA. -* **Can't take advantage of all Microsoft 365 capabilities for all apps** (Intune, Conditional Access, passwordless, etc.) - Many universities want to move towards the cloud and leverage their existing investments in Azure AD. However, with a different federation provider as their primary IdP, universities can't take advantage of all the Microsoft 365 capabilities for the rest of their apps. + Higher education also experiences fragmented service ownership. The people responsible for key services such as enterprise resource planning, learning management systems, division, and department solutions might resist efforts to change or modify the systems that they operate. -* **Complexity of solution** - There are many different components to manage, with some components in the cloud and some on-premises or in IaaS instances. Apps are operated in many different places. From a user perspective, this can be a disjointed experience. 
For example, sometimes users see a Shibboleth login page and other times an Azure AD login page. +* **Can't take advantage of all Microsoft 365 capabilities for all apps** (for example, Intune, Conditional Access, passwordless): Many universities want to move toward the cloud and use their existing investments in Azure AD. However, with a different federation provider as their primary IdP, universities can't take advantage of all the Microsoft 365 capabilities for the rest of their apps. -* **Complexity of a solution**: There are many components to manage. Some components are in the cloud, and some are on-premises or in infrastructure as a service (IaaS) instances. Apps are operated in many places. From a user perspective, this experience can be disjointed. For example, users sometimes see a Shibboleth sign-in page and other times see an Azure AD sign-in page. ++We present three solutions to solve these challenges, while also addressing the following requirements: * Ability to participate in multilateral federations such as InCommon and eduGAIN -* Ability to support all types of apps (even those that require legacy protocols) +* Ability to support all types of apps (even apps that require legacy protocols) * Ability to support external directories and attribute stores -These three solutions are presented in order from most preferred to least preferred. Each satisfies requirements but introduces tradeoff decisions expected in a complex architecture. Based on your requirements and starting point, select the one that best suits your environment. A decision tree is provided to help aid in this decision. -+We present the three solutions in order, from most preferred to least preferred. Each satisfies requirements but introduces tradeoff decisions that are expected in a complex architecture. Based on your requirements and starting point, select the one that best suits your environment. We also provide a decision tree to aid in this decision. ## Next steps -See these related multilateral federation articles: +See these related articles about multilateral federation: [Multilateral federation introduction](multilateral-federation-introduction.md) -[Multilateral federation solution one - Azure AD with Cirrus Bridge](multilateral-federation-solution-one.md) +[Multilateral federation Solution 1: Azure AD with Cirrus Bridge](multilateral-federation-solution-one.md) -[Multilateral federation solution two - Azure AD to Shibboleth as SP Proxy](multilateral-federation-solution-two.md) +[Multilateral federation Solution 2: Azure AD with Shibboleth as a SAML proxy](multilateral-federation-solution-two.md) -[Multilateral federation solution three - Azure AD with ADFS and Shibboleth](multilateral-federation-solution-three.md) +[Multilateral federation Solution 3: Azure AD with AD FS and Shibboleth](multilateral-federation-solution-three.md) [Multilateral federation decision tree](multilateral-federation-decision-tree.md)- |
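The baseline design turns on which domains are federated to an external IdP such as Shibboleth and which are managed natively in Azure AD. A minimal inventory check, assuming the Microsoft Graph PowerShell SDK and `Domain.Read.All` consent (an illustration, not part of the article above):

```powershell
# Read-only look at how each verified domain authenticates
Connect-MgGraph -Scopes "Domain.Read.All"

# AuthenticationType is "Managed" for native Azure AD sign-in, or
# "Federated" when sign-in is delegated to an external IdP
Get-MgDomain | Select-Object Id, AuthenticationType, IsDefault | Format-Table
```

Domains that report `Federated` are the ones whose sign-in flows redirect to the federation provider in the baseline diagram.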
active-directory | Multilateral Federation Decision Tree | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/multilateral-federation-decision-tree.md | -Use this decision tree to help you determine the solution best suited for your environment. +Use this decision tree to determine the multilateral federation solution that's best suited for your environment. -[](media/multilateral-federation-decision-tree/tradeoff-decision-matrix.png#lightbox) +[](media/multilateral-federation-decision-tree/tradeoff-decision-matrix.png#lightbox) ## Migration resources -The following are resources to help with your migration to the solutions covered in this content. +The following resources can help with your migration to the solutions covered in this content. -| Migration Resource | Description | Relevant for migrating to... | +| Migration resource | Description | Relevant for migrating to... | | - | - | - |-| [Resources for migrating applications to Azure Active Directory (Azure AD)](../manage-apps/migration-resources.md) | List of resources to help you migrate application access and authentication to Azure AD | Solution 1, Solution 2, and Solution 3 | -| [Azure AD custom claims provider](../develop/custom-claims-provider-overview.md)|This article provides an overview to the Azure AD custom claims provider | Solution 1 | -| [Custom security attributes documentation](../fundamentals/custom-security-attributes-manage.md) | This article describes how to manage access to custom security attributes | Solution 1 | -| [Azure AD SSO integration with Cirrus Identity Bridge](../saas-apps/cirrus-identity-bridge-for-azure-ad-tutorial.md) | Tutorial to integrate Cirrus Identity Bridge for Azure AD with Azure AD | Solution 1 | -| [Cirrus Identity Bridge Overview](https://blog.cirrusidentity.com/documentation/azure-bridge-setup-rev-6.0) | Link to the documentation for the Cirrus Identity Bridge | Solution 1 | -| [Configuring Shibboleth as SAML Proxy](https://shibboleth.atlassian.net/wiki/spaces/KB/pages/1467056889/Using+SAML+Proxying+in+the+Shibboleth+IdP+to+connect+with+Azure+AD) | Link to a Shibboleth article that describes how to use the SAML proxying feature to connect Shibboleth IdP to Azure AD | Solution 2 | -| [Azure MFA deployment considerations](../authentication/howto-mfa-getstarted.md) | Link to guidance for configuring multi-factor authentication (MFA) using Azure AD | Solution 1 and Solution 2 | +| [Resources for migrating applications to Azure Active Directory](../manage-apps/migration-resources.md) | List of resources to help you migrate application access and authentication to Azure Active Directory (Azure AD) | Solution 1, Solution 2, and Solution 3 | +| [Azure AD custom claims provider](../develop/custom-claims-provider-overview.md)| Overview of the Azure AD custom claims provider | Solution 1 | +| [Custom security attributes](../fundamentals/custom-security-attributes-manage.md) | Steps for managing access to custom security attributes | Solution 1 | +| [Azure AD SSO integration with Cirrus Bridge](../saas-apps/cirrus-identity-bridge-for-azure-ad-tutorial.md) | Tutorial to integrate Cirrus Bridge with Azure AD | Solution 1 | +| [Cirrus Bridge overview](https://blog.cirrusidentity.com/documentation/azure-bridge-setup-rev-6.0) | Cirrus Identity documentation for configuring Cirrus Bridge with Azure AD | Solution 1 | +| [Configuring Shibboleth as a SAML 
proxy](https://shibboleth.atlassian.net/wiki/spaces/KB/pages/1467056889/Using+SAML+Proxying+in+the+Shibboleth+IdP+to+connect+with+Azure+AD) | Shibboleth article that describes how to use the SAML proxying feature to connect the Shibboleth identity provider (IdP) to Azure AD | Solution 2 | +| [Azure AD Multi-Factor Authentication deployment considerations](../authentication/howto-mfa-getstarted.md) | Guidance for configuring Azure AD Multi-Factor Authentication | Solution 1 and Solution 2 | ## Next steps -See these additional multilateral federation articles: +See these related articles about multilateral federation: [Multilateral federation introduction](multilateral-federation-introduction.md) -[Multilateral federation baseline design](multilateral-federation-baseline.md) +[Multilateral federation baseline design](multilateral-federation-baseline.md) -[Multilateral federation solution one - Azure AD with Cirrus Bridge](multilateral-federation-solution-one.md) +[Multilateral federation Solution 1: Azure AD with Cirrus Bridge](multilateral-federation-solution-one.md) -[Multilateral federation solution two - Azure AD to Shibboleth as SP Proxy](multilateral-federation-solution-two.md) +[Multilateral federation Solution 2: Azure AD with Shibboleth as a SAML proxy](multilateral-federation-solution-two.md) -[Multilateral federation solution three - Azure AD with ADFS and Shibboleth](multilateral-federation-solution-three.md) +[Multilateral federation Solution 3: Azure AD with AD FS and Shibboleth](multilateral-federation-solution-three.md) |
active-directory | Multilateral Federation Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/multilateral-federation-introduction.md | -Research universities need to collaborate with one another. To accomplish collaboration, they require multilateral federation to enable authentication and access between universities globally. +Research universities need to collaborate with one another. To accomplish collaboration, they require multilateral federation to enable authentication and access between universities globally. ## Challenges with multilateral federation solutions -Universities face many challenges. For example, one university might use one identity management system and a set of protocols while other universities use a different set of technologies, depending on their requirements. In general, universities can: +Universities face many challenges. For example, a university might use one identity management system and a set of protocols. Other universities might use a different set of technologies, depending on their requirements. In general, universities can: -* Use different identity management systems +* Use different identity management systems. -* Use different protocols +* Use different protocols. -* Use customized solutions +* Use customized solutions. -* Require support for a long history of legacy functionality +* Need support for a long history of legacy functionality. -* Need to support solutions that are built in different IT generations +* Need support for solutions that are built in different IT generations. Many universities are also adopting the Microsoft 365 suite of productivity and collaboration tools. These tools rely on Azure Active Directory (Azure AD) for identity management, which enables universities to configure: -* Single sign-on (SSO) across multiple applications +* Single sign-on across multiple applications. -* Modern security controls, including passwordless authentication, MFA, adaptive conditional access, and Identity Protection +* Modern security controls, including passwordless authentication, multifactor authentication, adaptive Conditional Access, and identity protection. -* Enhanced reporting and monitoring +* Enhanced reporting and monitoring. -Because Azure AD doesn't natively support multilateral federation, this content describes three solutions for federating authentication and access between universities with typical research university architecture. In these scenarios, non-Microsoft products are mentioned for illustrative purposes only and represent the broader class of product. For example, Shibboleth is used as an example of a federation provider. +Because Azure AD doesn't natively support multilateral federation, this content describes three solutions for federating authentication and access between universities with a typical research university architecture. These scenarios mention non-Microsoft products for illustrative purposes only and to represent the broader class of products. For example, this content uses Shibboleth as an example of a federation provider. 
## Next steps -See these other multilateral federation articles: +See these related articles about multilateral federation: -[Multilateral federation baseline design](multilateral-federation-baseline.md) +[Multilateral federation baseline design](multilateral-federation-baseline.md) -[Multilateral federation solution one - Azure AD with Cirrus Bridge](multilateral-federation-solution-one.md) +[Multilateral federation Solution 1: Azure AD with Cirrus Bridge](multilateral-federation-solution-one.md) -[Multilateral federation solution two - Azure AD to Shibboleth as SP Proxy](multilateral-federation-solution-two.md) +[Multilateral federation Solution 2: Azure AD with Shibboleth as a SAML proxy](multilateral-federation-solution-two.md) -[Multilateral federation solution three - Azure AD with ADFS and Shibboleth](multilateral-federation-solution-three.md) +[Multilateral federation Solution 3: Azure AD with AD FS and Shibboleth](multilateral-federation-solution-three.md) [Multilateral federation decision tree](multilateral-federation-decision-tree.md) |
active-directory | Multilateral Federation Solution One | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/multilateral-federation-solution-one.md | Title: University multilateral federation design scenario one -description: First scenario design considerations for a multilateral federation solution for universities. + Title: 'Solution 1: Azure AD with Cirrus Bridge' +description: This article describes design considerations for using Azure AD with Cirrus Bridge as a multilateral federation solution for universities. -In Solution 1, Azure AD is used as the primary IdP for all applications while a managed service provides multilateral federation. In this example, Cirrus Bridge is the managed service used for integration of CAS and multilateral federation apps. +Solution 1 uses Azure Active Directory (Azure AD) as the primary identity provider (IdP) for all applications. A managed service provides multilateral federation. In this example, Cirrus Bridge is the managed service for integration of Central Authentication Service (CAS) and multilateral federation apps. -[](media/multilateral-federation-solution-one/cirrus-bridge.png#lightbox) +[](media/multilateral-federation-solution-one/cirrus-bridge.png#lightbox) -If on-premises Active Directory is also being used, then [AD is configured](../hybrid/whatis-hybrid-identity.md) with hybrid identities. Implementing this Azure AD with Cirrus Bridge solution provides: +If you're also using an on-premises Active Directory instance, you can [configure Active Directory](../hybrid/whatis-hybrid-identity.md) with hybrid identities. Implementing the Azure AD with Cirrus Bridge solution provides: -* **A Security Assertion Markup Language (SAML) bridge** - Enables you to configure multilateral federation and participation in InCommon and EduGAIN. The SAML bridge also enables you to configure Azure AD conditional access policies, app assignment, governance, and other features for each multilateral federation app. +* **Security Assertion Markup Language (SAML) bridge**: Configure multilateral federation and participation in InCommon and eduGAIN. You can also use the SAML bridge to configure Azure AD Conditional Access policies, app assignment, governance, and other features for each multilateral federation app. -* **CAS bridge** - Enables you to provide protocol translation to support on-premises CAS apps to authenticate with Azure AD. The CAS bridge enables you to configure Azure AD conditional access policies, app assignment, and governance for all CAS apps, as a whole. +* **CAS bridge**: Provide protocol translation so that on-premises CAS apps can authenticate with Azure AD. You can use the CAS bridge to configure Azure AD Conditional Access policies, app assignment, and governance for all CAS apps as a whole. -Implementing Azure AD with Cirrus bridge enables you to take advantage of more capabilities available in Azure AD: +When you implement Azure AD with Cirrus Bridge, you can take advantage of more capabilities in Azure AD: -* **Custom claims provider support** - [Azure AD custom claims provider](../develop/custom-claims-provider-overview.md) enables you to use an external attribute store (like an external LDAP Directory) to add additional claims into tokens on a per app basis. It uses a custom extension that calls an external REST API to fetch claims from external systems. 
+* **Custom claims provider support**: With the [Azure AD custom claims provider](../develop/custom-claims-provider-overview.md), you can use an external attribute store (like an external LDAP directory) to add claims into tokens for individual apps. The custom claims provider uses a custom extension that calls an external REST API to fetch claims from external systems. -* **Custom security attributes** - Provides you with the ability to add custom attributes to objects in the directory and control who can read them. [Custom security attributes](../fundamentals/custom-security-attributes-overview.md) enable you to store more of your attributes directly in Azure AD. +* **Custom security attributes**: You can add custom attributes to objects in the directory and control who can read them. [Custom security attributes](../fundamentals/custom-security-attributes-overview.md) enable you to store more of your attributes directly in Azure AD. ## Advantages -The following are some of the advantages of implementing Azure AD with Cirrus bridge: +Here are some of the advantages of implementing Azure AD with Cirrus Bridge: * **Seamless cloud authentication for all apps** - * Elimination of all on-premises identity components can lower your operational effort and potentially reduce security risks. + * All apps authenticate through Azure AD. - * You may realize cost savings resulting from not having to host on-premises infrastructure. -- * This managed solution may help you save on operational administration costs and improve security posture and free up resources for other efforts. + * Elimination of all on-premises identity components in a managed service can potentially lower your operational and administrative costs, reduce security risks, and free up resources for other efforts. * **Streamlined configuration, deployment, and support model** The following are some of the advantages of implementing Azure AD with Cirrus br * You benefit from an established process for configuring and setting up the bridge solution. - * Cirrus Identity provides 24/7 support. --* **Conditional Access (CA) support for multilateral federation apps** + * Cirrus Identity provides continuous support. - * You receive support for [National Institutes of Health (NIH)](https://auth.nih.gov/CertAuthV3/forms/help/compliancecheckhelp.html) and Research and Education FEDerations group (REFEDS). +* **Conditional Access support for multilateral federation apps** - * This solution is the only architecture that enables you to configure granular Azure AD CA for multilateral federation apps. + * Implementation of Conditional Access controls helps you comply with [NIH](https://auth.nih.gov/CertAuthV3/forms/help/compliancecheckhelp.html) and [REFEDS](https://refeds.org/category/research-and-scholarship) requirements. - * Granular CA is supported for both multilateral federation apps and CAS apps. Implementation of CA controls enables you to comply with the [NIH](https://auth.nih.gov/CertAuthV3/forms/help/compliancecheckhelp.html) and [REFEDS](https://refeds.org/category/research-and-scholarship) requirements. + * This solution is the only architecture that enables you to configure granular Azure AD Conditional Access for both multilateral federation apps and CAS apps. -* **Enables you to use other Azure AD-related solutions for all apps** (Intune, AADJ devices, etc.) +* **Use of other Azure AD-related solutions for all apps** - * Enables you to use Azure AD Join for device management. 
+ * You can use Intune and Azure AD join for device management. - * Azure AD Join provides you with the ability to use Autopilot, Azure AD Multi-Factor Authentication, passwordless features, and supports achieving a Zero Trust posture. + * Azure AD join enables you to use Windows Autopilot, Azure AD Multi-Factor Authentication, and passwordless features. Azure AD join supports achieving a Zero Trust posture. -> [!NOTE] -> Switching to Azure AD Multi-Factor Authentication may allow you to realize significant cost savings over other solutions you have in place. + > [!NOTE] + > Switching to Azure AD Multi-Factor Authentication might help you save significant costs over other solutions that you have in place. ## Considerations and trade-offs -The following are some of the trade-offs of using this solution: +Here are some of the trade-offs of using this solution: -* **Limited ability to customize your authentication experience** - This scenario provides a managed solution. Therefore, this solution might not offer you the flexibility or granularity to build a custom solution using federation provider products. +* **Limited ability to customize the authentication experience**: This scenario provides a managed solution. It might not offer you the flexibility or granularity to build a custom solution by using federation provider products. -* **Limited third-party MFA integration** - You might be limited by the number of integrations available to third-party MFA solutions. +* **Limited third-party MFA integration**: The number of integrations available to third-party MFA solutions might be limited. -* **One time integration effort required** - To streamline integration, you need to perform a one-time migration of all student and faculty apps to Azure AD, as well as set up the Cirrus Bridge. +* **One-time integration effort required**: To streamline integration, you need to perform a one-time migration of all student and faculty apps to Azure AD. You also need to set up Cirrus Bridge. -* **Subscription required for Cirrus Bridge** - An annual subscription is required for the Cirrus Bridge. The subscription fee is based on anticipated annual authentication usage of the bridge. +* **Subscription required for Cirrus Bridge**: The subscription fee for Cirrus Bridge is based on anticipated annual authentication usage of the bridge. ## Migration resources -The following are resources to help with your migration to this solution architecture. +The following resources help with your migration to this solution architecture. 
-| Migration Resource | Description | +| Migration resource | Description | | - | - |-| [Resources for migrating applications to Azure Active Directory (Azure AD)](../manage-apps/migration-resources.md) | List of resources to help you migrate application access and authentication to Azure AD | -| [Azure AD custom claims provider](../develop/custom-claims-provider-overview.md)|This article provides an overview to the Azure AD custom claims provider | -| [Custom security attributes documentation](../fundamentals/custom-security-attributes-manage.md) | This article describes how to manage access to custom security attributes | -| [Azure AD SSO integration with Cirrus Identity Bridge](../saas-apps/cirrus-identity-bridge-for-azure-ad-tutorial.md) | Tutorial to integrate Cirrus Identity Bridge for Azure AD with Azure AD | -| [Cirrus Identity Bridge Overview](https://blog.cirrusidentity.com/documentation/azure-bridge-setup-rev-6.0) | Link to the documentation for the Cirrus Identity Bridge | -| [Azure MFA deployment considerations](../authentication/howto-mfa-getstarted.md) | Link to guidance for configuring multi-factor authentication (MFA) using Azure AD | +| [Resources for migrating applications to Azure Active Directory](../manage-apps/migration-resources.md) | List of resources to help you migrate application access and authentication to Azure AD | +| [Azure AD custom claims provider](../develop/custom-claims-provider-overview.md)| Overview of the Azure AD custom claims provider | +| [Custom security attributes](../fundamentals/custom-security-attributes-manage.md) | Steps for managing access to custom security attributes | +| [Azure AD single sign-on integration with Cirrus Bridge](../saas-apps/cirrus-identity-bridge-for-azure-ad-tutorial.md) | Tutorial to integrate Cirrus Bridge with Azure AD | +| [Cirrus Bridge overview](https://blog.cirrusidentity.com/documentation/azure-bridge-setup-rev-6.0) | Cirrus Identity documentation for configuring Cirrus Bridge with Azure AD | +| [Azure AD Multi-Factor Authentication deployment considerations](../authentication/howto-mfa-getstarted.md) | Guidance for configuring Azure AD Multi-Factor Authentication | ## Next steps -See these other multilateral federation articles: +See these related articles about multilateral federation: [Multilateral federation introduction](multilateral-federation-introduction.md) -[Multilateral federation baseline design](multilateral-federation-baseline.md) +[Multilateral federation baseline design](multilateral-federation-baseline.md) -[Multilateral federation solution two - Azure AD to Shibboleth as SP Proxy](multilateral-federation-solution-two.md) +[Multilateral federation Solution 2: Azure AD with Shibboleth as a SAML proxy](multilateral-federation-solution-two.md) -[Multilateral federation solution three - Azure AD with ADFS and Shibboleth](multilateral-federation-solution-three.md) +[Multilateral federation Solution 3: Azure AD with AD FS and Shibboleth](multilateral-federation-solution-three.md) [Multilateral federation decision tree](multilateral-federation-decision-tree.md) |
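Solution 1's Conditional Access advantage can be made concrete: because the bridge publishes each multilateral federation app into Azure AD, a policy can target that app directly. A hedged sketch using Microsoft Graph PowerShell, assuming `Policy.ReadWrite.ConditionalAccess` consent; the display name and app ID are placeholders, not values from the article:

```powershell
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

# Placeholder: ID of the bridge-published enterprise app in your tenant
$federatedAppId = "00000000-0000-0000-0000-000000000000"

$policy = @{
    DisplayName = "Require MFA for multilateral federation app"
    State       = "enabledForReportingButNotEnforced"   # report-only while validating
    Conditions  = @{
        Applications = @{ IncludeApplications = @($federatedAppId) }
        Users        = @{ IncludeUsers = @("All") }
    }
    GrantControls = @{
        Operator        = "OR"
        BuiltInControls = @("mfa")
    }
}

New-MgIdentityConditionalAccessPolicy -BodyParameter $policy
```

Report-only mode lets you observe the policy's impact before enforcing it.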
active-directory | Multilateral Federation Solution Three | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/multilateral-federation-solution-three.md | Title: University multilateral federation design scenario three -description: Third scenario design considerations for a multilateral federation solution for universities. + Title: 'Solution 3: Azure AD with AD FS and Shibboleth' +description: This article describes design considerations for using Azure AD with AD FS and Shibboleth as a multilateral federation solution for universities. -# Solution 3: Azure AD with ADFS and Shibboleth +# Solution 3: Azure AD with AD FS and Shibboleth -In Solution 3, the federation provider is the primary IdP. As shown in this example, Shibboleth is the federation provider for integration of multilateral federation apps, on-premises CAS apps, and any LDAP directories. +In Solution 3, the federation provider is the primary identity provider (IdP). In this example, Shibboleth is the federation provider for the integration of multilateral federation apps, on-premises Central Authentication Service (CAS) apps, and any Lightweight Directory Access Protocol (LDAP) directories. -[](media/multilateral-federation-solution-three/shibboleth-adfs-azure-ad.png#lightbox) +[](media/multilateral-federation-solution-three/shibboleth-adfs-azure-ad.png#lightbox) In this scenario, Shibboleth is the primary IdP. Participation in multilateral federations (for example, with InCommon) is done through Shibboleth, which natively supports this integration. On-premises CAS apps and the LDAP directory are also integrated with Shibboleth. -Student apps, faculty apps, and Microsoft 365 apps are integrated with Azure AD. Any on-premises instance of AD is synced to Azure AD. Active Directory Federated Services (ADFS) is used for third-party multi-factor authentication (MFA) integration. ADFS is also used to perform protocol translation and to enable certain Azure AD features such as Azure AD Join for device management, Autopilot, and passwordless features. +Student apps, faculty apps, and Microsoft 365 apps are integrated with Azure Active Directory (Azure AD). Any on-premises instance of Active Directory is synced with Azure AD. Active Directory Federation Services (AD FS) provides integration with third-party multifactor authentication (MFA). AD FS performs protocol translation and enables certain Azure AD features, such as Azure AD join for device management, Windows Autopilot, and passwordless features. ## Advantages -The following are some of the advantages of using this solution: +Here are some of the advantages of using this solution: -* **Customized authentication** - Enables you to customize the experience for multilateral federation apps through Shibboleth. +* **Customized authentication**: You can customize the experience for multilateral federation apps through Shibboleth. -* **Ease of execution** - Simple to implement in the short-term for institutions already using Shibboleth as their primary IdP. You need to migrate student and faculty apps to Azure AD and add an ADFS instance. +* **Ease of execution**: The solution is simple to implement in the short term for institutions that already use Shibboleth as their primary IdP. You need to migrate student and faculty apps to Azure AD and add an AD FS instance. -* **Minimal disruption** - Allows third-party MFA so you can keep existing MFA solutions such as Duo in place until you're ready for an update. 
+* **Minimal disruption**: The solution allows third-party MFA. You can keep existing MFA solutions, such as Duo, in place until you're ready for an update. ## Considerations and trade-offs -The following are some of the trade-offs of using this solution: +Here are some of the trade-offs of using this solution: -* **Higher complexity and security risk** - With an on-premises footprint, there may be higher complexity to the environment and extra security risks. There may also be increased overhead and fees associated with managing these on-premises components. +* **Higher complexity and security risk**: An on-premises footprint might mean higher complexity for the environment and extra security risks, compared to a managed service. Increased overhead and fees might also be associated with managing on-premises components. -* **Suboptimal authentication experiences** - For multilateral federation and CAS apps, there's no cloud-based authentication mechanism and there might be multiple redirects. +* **Suboptimal authentication experience**: For multilateral federation and CAS apps, there's no cloud-based authentication mechanism and there might be multiple redirects. -* **No granular CA support** - This solution doesn't provide granular Conditional Access (CA) support. +* **No Azure AD Multi-Factor Authentication support**: This solution doesn't enable Azure AD Multi-Factor Authentication support for multilateral federation or CAS apps. You might miss potential cost savings. -* **No Azure AD Multi-Factor Authentication support** - This solution doesn't enable Azure AD Multi-Factor Authentication support for multilateral federation or CAS apps and might cause you to miss out on potential cost savings. +* **No granular Conditional Access support**: Without granular Conditional Access support, you're limited to policy decisions that apply to all multilateral federation and CAS apps together. -* **Significant ongoing staff allocation** - IT staff must maintain infrastructure and software for the authentication solution. Any staff attrition might introduce risk. +* **Significant ongoing staff allocation**: IT staff must maintain infrastructure and software for the authentication solution. Any staff attrition might introduce risk. ## Migration resources -The following are resources to help with your migration to this solution architecture. +The following resources can help with your migration to this solution architecture. 
-| Migration Resource | Description | +| Migration resource | Description | | - | - |-| [Resources for migrating applications to Azure Active Directory (Azure AD)](../manage-apps/migration-resources.md) | List of resources to help you migrate application access and authentication to Azure AD | +| [Resources for migrating applications to Azure Active Directory](../manage-apps/migration-resources.md) | List of resources to help you migrate application access and authentication to Azure AD | ## Next steps -See these related multilateral federation articles: +See these related articles about multilateral federation: [Multilateral federation introduction](multilateral-federation-introduction.md) -[Multilateral federation baseline design](multilateral-federation-baseline.md) +[Multilateral federation baseline design](multilateral-federation-baseline.md) -[Multilateral federation solution one - Azure AD with Cirrus Bridge](multilateral-federation-solution-one.md) +[Multilateral federation Solution 1: Azure AD with Cirrus Bridge](multilateral-federation-solution-one.md) -[Multilateral federation solution two - Azure AD to Shibboleth as SP Proxy](multilateral-federation-solution-two.md) +[Multilateral federation Solution 2: Azure AD with Shibboleth as a SAML proxy](multilateral-federation-solution-two.md) [Multilateral federation decision tree](multilateral-federation-decision-tree.md) |
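Solution 3 hinges on a federation trust between Azure AD and AD FS. For reference only, that trust has traditionally been established for a domain from the AD FS farm with the older MSOnline module; a sketch under that assumption, with a placeholder domain name:

```powershell
# Run where the MSOnline module is installed and the AD FS farm is reachable
Connect-MsolService

# Federate the domain's Azure AD sign-in through AD FS;
# -SupportMultipleDomain applies when several UPN suffixes share one farm
Convert-MsolDomainToFederated -DomainName "students.contoso.edu" -SupportMultipleDomain
```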
active-directory | Multilateral Federation Solution Two | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/multilateral-federation-solution-two.md | Title: University multilateral federation design scenario two -description: Second scenario design considerations for a multilateral federation solution for universities. + Title: 'Solution 2: Azure AD with Shibboleth as a SAML proxy' +description: This article describes design considerations for using Azure AD with Shibboleth as a SAML proxy as a multilateral federation solution for universities. -# Solution 2: Azure AD to Shibboleth as SP Proxy +# Solution 2: Azure AD with Shibboleth as a SAML proxy -In Solution 2, Azure AD acts as the primary IdP and the federation provider acts as a SAML proxy to the CAS apps and the multilateral federation apps. In this example, we show [Shibboleth acting as the SAML proxy](https://shibboleth.atlassian.net/wiki/spaces/KB/pages/1467056889/Using+SAML+Proxying+in+the+Shibboleth+IdP+to+connect+with+Azure+AD) to provide a reference link. +In Solution 2, Azure Active Directory (Azure AD) acts as the primary identity provider (IdP). The federation provider acts as a Security Assertion Markup Language (SAML) proxy to the Central Authentication Service (CAS) apps and the multilateral federation apps. In this example, [Shibboleth acts as the SAML proxy](https://shibboleth.atlassian.net/wiki/spaces/KB/pages/1467056889/Using+SAML+Proxying+in+the+Shibboleth+IdP+to+connect+with+Azure+AD); see the linked article for a reference configuration. -[](media/multilateral-federation-solution-two/azure-ad-shibboleth-as-sp-proxy.png#lightbox) +[](media/multilateral-federation-solution-two/azure-ad-shibboleth-as-sp-proxy.png#lightbox) -Azure AD is the primary IdP so all student and faculty apps are integrated with Azure AD. All Microsoft 365 apps are also integrated with Azure AD. If Active Directory Domain Services (AD) is in use, then it also is synchronized with Azure AD. +Because Azure AD is the primary IdP, all student and faculty apps are integrated with Azure AD. All Microsoft 365 apps are also integrated with Azure AD. If Azure Active Directory Domain Services is in use, it's also synchronized with Azure AD. The SAML proxy feature of Shibboleth integrates with Azure AD. In Azure AD, Shibboleth appears as a non-gallery enterprise application. Universities can get single sign-on (SSO) for their CAS apps and can participate in the InCommon environment. Additionally, Shibboleth provides integration for Lightweight Directory Access Protocol (LDAP) directory services. ## Advantages -The following are some of the advantages of using this solution: +Advantages of using this solution include: -* **Provides cloud authentication for all apps** - All apps - authenticate through Azure AD. +* **Cloud authentication for all apps**: All apps authenticate through Azure AD. -* **Ease of execution** - This solution provides short-term - ease-of-execution for universities that are already using - Shibboleth. +* **Ease of execution**: This solution provides short-term ease of execution for universities that are already using Shibboleth. ## Considerations and trade-offs -The following are some of the trade-offs of using this solution: +Here are some of the trade-offs of using this solution: -* **Limited authentication experience customization** - There are - limited options for customizing the authentication experience for - end users. 
+* **Higher complexity and security risk**: An on-premises footprint might mean higher complexity for the environment and extra security risks, compared to a managed service. Increased overhead and fees might also be associated with managing on-premises components. -* **Limited third-party MFA integration** - The number of integrations - available to third-party MFA solutions might be limited. +* **Suboptimal authentication experience**: For multilateral federation and CAS apps, the authentication experience for users might not be seamless because of redirects through Shibboleth. The options for customizing the authentication experience for users are limited. -* **Higher complexity and security risk** - With an on-premises - footprint, there might be higher complexity to the environment and - extra security risks. There might also be increased overhead - and fees associated with managing these on-premises components. +* **Limited third-party multifactor authentication (MFA) integration**: The number of integrations available to third-party MFA solutions might be limited. -* **Suboptimal authentication experiences** - For multilateral - federation and CAS apps, the authentication experience for end users - might be suboptimal due to redirects through Shibboleth. --* **No granular CA support** - This solution doesn't provide - granular Conditional Access (CA) support, meaning that you would - have to decide on either the least common denominator (optimize for - less friction, but limited security controls) or the highest common - denominator (optimize for security controls, but at the expense of - user friction) with limited ability to make granular decisions. +* **No granular Conditional Access support**: Without granular Conditional Access support, you have to choose between the least common denominator (optimize for less friction but have limited security controls) and the highest common denominator (optimize for security controls at the expense of user friction). Your ability to make granular decisions is limited. ## Migration resources -The following are resources to help with your migration to this solution architecture. +The following resources can help with your migration to this solution architecture. 
-| Migration Resource | Description | +| Migration resource | Description | | - | - |-| [Resources for migrating applications to Azure Active Directory (Azure AD)](../manage-apps/migration-resources.md) | List of resources to help you migrate application access and authentication to Azure AD | -| [Configuring Shibboleth as SAML Proxy](https://shibboleth.atlassian.net/wiki/spaces/KB/pages/1467056889/Using+SAML+Proxying+in+the+Shibboleth+IdP+to+connect+with+Azure+AD) | Link to a Shibboleth article that describes how to use the SAML proxying feature to connect Shibboleth IdP to Azure AD | -| [Azure MFA deployment considerations](../authentication/howto-mfa-getstarted.md) | Link to guidance for configuring multi-factor authentication (MFA) using Azure AD | +| [Resources for migrating applications to Azure Active Directory](../manage-apps/migration-resources.md) | List of resources to help you migrate application access and authentication to Azure AD | +| [Configuring Shibboleth as a SAML proxy](https://shibboleth.atlassian.net/wiki/spaces/KB/pages/1467056889/Using+SAML+Proxying+in+the+Shibboleth+IdP+to+connect+with+Azure+AD) | Shibboleth article that describes how to use the SAML proxying feature to connect the Shibboleth IdP to Azure AD | +| [Azure AD Multi-Factor Authentication deployment considerations](../authentication/howto-mfa-getstarted.md) | Guidance for configuring Azure AD Multi-Factor Authentication | ## Next steps -See these other multilateral federation articles: +See these related articles about multilateral federation: [Multilateral federation introduction](multilateral-federation-introduction.md) -[Multilateral federation baseline design](multilateral-federation-baseline.md) +[Multilateral federation baseline design](multilateral-federation-baseline.md) -[Multilateral federation solution one - Azure AD with Cirrus Bridge](multilateral-federation-solution-one.md) +[Multilateral federation Solution 1: Azure AD with Cirrus Bridge](multilateral-federation-solution-one.md) -[Multilateral federation solution three - Azure AD with ADFS and Shibboleth](multilateral-federation-solution-three.md) +[Multilateral federation Solution 3: Azure AD with AD FS and Shibboleth](multilateral-federation-solution-three.md) [Multilateral federation decision tree](multilateral-federation-decision-tree.md) |
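In Solution 2, Shibboleth appears in Azure AD as a non-gallery enterprise application. One way to create that application object is to instantiate the non-gallery ("custom") application template through Microsoft Graph PowerShell; a sketch, assuming `Application.ReadWrite.All` consent and that the template ID below is the documented non-gallery template (the display name is a placeholder):

```powershell
Connect-MgGraph -Scopes "Application.ReadWrite.All"

# Instantiating the template creates the application object and its
# service principal; SAML settings are configured on it afterward
Invoke-MgInstantiateApplicationTemplate `
    -ApplicationTemplateId "8adf8e6e-67b2-4cf2-a259-e3dc5476c621" `
    -DisplayName "Shibboleth SAML proxy"
```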
active-directory | Whats New Sovereign Clouds Archive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-sovereign-clouds-archive.md | |
active-directory | Whats New Sovereign Clouds | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-sovereign-clouds.md | |
active-directory | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md | |
active-directory | Check Status Workflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/check-status-workflow.md | |
active-directory | Check Workflow Execution Scope | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/check-workflow-execution-scope.md | |
active-directory | Configure Logic App Lifecycle Workflows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/configure-logic-app-lifecycle-workflows.md | |
active-directory | Create Lifecycle Workflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/create-lifecycle-workflow.md | |
active-directory | Customize Workflow Email | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/customize-workflow-email.md | |
active-directory | Customize Workflow Schedule | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/customize-workflow-schedule.md | |
active-directory | Delete Lifecycle Workflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/delete-lifecycle-workflow.md | |
active-directory | Entitlement Management Access Package Approval Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-approval-policy.md | |
active-directory | Entitlement Management Access Package Assignments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-assignments.md | |
active-directory | Entitlement Management Access Package Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-create.md | |
active-directory | Entitlement Management Access Package Edit | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-edit.md | |
active-directory | Entitlement Management Access Package First | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-first.md | |
active-directory | Entitlement Management Access Package Incompatible | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-incompatible.md | |
active-directory | Entitlement Management Access Package Lifecycle Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-lifecycle-policy.md | |
active-directory | Entitlement Management Access Package Request Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-request-policy.md | |
active-directory | Entitlement Management Access Package Requests | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-requests.md | |
active-directory | Entitlement Management Access Package Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-resources.md | If you need to add resources to an access package, you should check whether the **Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, or Access package manager -1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**. +1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**. -1. In the left menu, click **Catalog** and then open the catalog. +1. In the left menu, select **Catalog** and then open the catalog. -1. In the left menu, click **Resources** to see the list of resources in this catalog. +1. In the left menu, select **Resources** to see the list of resources in this catalog.  1. If the resources aren't already in the catalog, and you're an administrator or a catalog owner, you can [add resources to a catalog](entitlement-management-catalog-create.md#add-resources-to-a-catalog). The types of resources you can add are groups, applications, and SharePoint Online sites. For example: -* Groups can be cloud-created Microsoft 365 Groups or cloud-created Azure AD security groups. Groups that originate in an on-premises Active Directory can't be assigned as resources because their owner or member attributes can't be changed in Azure AD. To give users access to an application that uses AD security group memberships, create a new group in Azure AD, configure [group writeback to AD](../hybrid/how-to-connect-group-writeback-v2.md), and [enable that group to be written to AD](../enterprise-users/groups-write-back-portal.md). Groups that originate in Exchange Online as Distribution groups can't be modified in Azure AD either. -* Applications can be Azure AD enterprise applications, which include both software as a service (SaaS) applications and your own applications integrated with Azure AD. If your application has not yet been integrated with Azure AD, see [govern access for applications in your environment](identity-governance-applications-prepare.md) and [integrate an application with Azure AD](identity-governance-applications-integrate.md). -* Sites can be SharePoint Online sites or SharePoint Online site collections. + * Groups can be cloud-created Microsoft 365 Groups or cloud-created Azure AD security groups. Groups that originate in an on-premises Active Directory can't be assigned as resources because their owner or member attributes can't be changed in Azure AD. To give users access to an application that uses AD security group memberships, create a new group in Azure AD, configure [group writeback to AD](../hybrid/how-to-connect-group-writeback-v2.md), and [enable that group to be written to AD](../enterprise-users/groups-write-back-portal.md). Groups that originate in Exchange Online as Distribution groups can't be modified in Azure AD either. + * Applications can be Azure AD enterprise applications, which include both software as a service (SaaS) applications and your own applications integrated with Azure AD. If your application hasn't yet been integrated with Azure AD, see [govern access for applications in your environment](identity-governance-applications-prepare.md) and [integrate an application with Azure AD](identity-governance-applications-integrate.md). 
+ * Sites can be SharePoint Online sites or SharePoint Online site collections. -1. If you are an access package manager and you need to add resources to the catalog, you can ask the catalog owner to add them. +1. If you're an access package manager and you need to add resources to the catalog, you can ask the catalog owner to add them. ## Add resource roles -A resource role is a collection of permissions associated with a resource. Resources can be made available for users to request if you add resource roles from each of the catalog's resources to your access package. You can add resource roles that are provided by groups, teams, applications, and SharePoint sites. When a user receives an assignment to an access package, they'll be added to all the resource roles in the access package. +A resource role is a collection of permissions associated with a resource. Resources can be made available for users to request if you add resource roles from each of the catalog's resources to your access package. You can add resource roles that are provided by groups, teams, applications, and SharePoint sites. When a user receives an assignment to an access package, they are added to all the resource roles in the access package. -If you want some users to receive different roles than others, then you'll need to create multiple access packages in the catalog, with separate access packages for each of the resource roles. You can also mark the access packages as [incompatible](entitlement-management-access-package-incompatible.md) with each other so users can't request access to access packages that would give them excessive access. +If you want some users to receive different roles than others, then you need to create multiple access packages in the catalog, with separate access packages for each of the resource roles. You can also mark the access packages as [incompatible](entitlement-management-access-package-incompatible.md) with each other so users can't request access to access packages that would give them excessive access. **Prerequisite role:** Global administrator, User administrator, Catalog owner, or Access package manager -1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**. +1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**. -1. In the left menu, click **Access packages** and then open the access package. +1. In the left menu, select **Access packages** and then open the access package. -1. In the left menu, click **Resource roles**. +1. In the left menu, select **Resource roles**. -1. Click **Add resource roles** to open the Add resource roles to access package page. +1. Select **Add resource roles** to open the Add resource roles to access package page.  If you want some users to receive different roles than others, then you'll need ## Add a group or team resource role -You can have entitlement management automatically add users to a group or a team in Microsoft Teams when they are assigned an access package. +You can have entitlement management automatically add users to a group or a team in Microsoft Teams when they're assigned an access package. 
- When a group or team is part of an access package and a user is assigned to that access package, the user is added to that group or team, if not already present.-- When a user's access package assignment expires, they are removed from the group or team, unless they currently have an assignment to another access package that includes that same group or team.+- When a user's access package assignment expires, they're removed from the group or team, unless they currently have an assignment to another access package that includes that same group or team. -You can select any [Azure AD security group or Microsoft 365 Group](../fundamentals/active-directory-groups-create-azure-portal.md). Administrators can add any group to a catalog; catalog owners can add any group to the catalog if they are owner of the group. Keep the following Azure AD constraints in mind when selecting a group: +You can select any [Azure AD security group or Microsoft 365 Group](../fundamentals/active-directory-groups-create-azure-portal.md). Administrators can add any group to a catalog; catalog owners can add any group to the catalog if they're an owner of the group. Keep the following Azure AD constraints in mind when selecting a group: - When a user, including a guest, is added as a member to a group or team, they can see all the other members of that group or team.-- Azure AD cannot change the membership of a group that was synchronized from Windows Server Active Directory using Azure AD Connect, or that was created in Exchange Online as a distribution group. -- The membership of dynamic groups cannot be updated by adding or removing a member, so dynamic group memberships are not suitable for use with entitlement management.-- M365 groups have additional constraints, described in the [overview of Microsoft 365 Groups for administrators](/microsoft-365/admin/create-groups/office-365-groups), including a limit of 100 owners per group, limits on how many members can access Group conversations concurrently, and 7000 groups per member.+- Azure AD can't change the membership of a group that was synchronized from Windows Server Active Directory using Azure AD Connect, or that was created in Exchange Online as a distribution group. +- The membership of dynamic groups can't be updated by adding or removing a member, so dynamic group memberships aren't suitable for use with entitlement management. +- Microsoft 365 groups have additional constraints, described in the [overview of Microsoft 365 Groups for administrators](/microsoft-365/admin/create-groups/office-365-groups), including a limit of 100 owners per group, limits on how many members can access Group conversations concurrently, and a limit of 7,000 groups per member. For more information, see [Compare groups](/office365/admin/create-groups/compare-groups) and [Microsoft 365 Groups and Microsoft Teams](/microsoftteams/office-365-groups). -1. On the **Add resource roles to access package** page, click **Groups and Teams** to open the Select groups pane. +1. On the **Add resource roles to access package** page, select **Groups and Teams** to open the Select groups pane. 1. Select the groups and teams you want to include in the access package.  -1. Click **Select**. +1. Select **Select**. - Once you select the group or team, the **Sub type** column will list one of the following subtypes: + Once you select the group or team, the **Sub type** column lists one of the following subtypes: | Sub type | Description | | | | | Security | Used for granting access to resources. 
| | Distribution | Used for sending notifications to a group of people. |- | Microsoft 365 | Microsoft 365 Group that is not Teams-enabled. Used for collaboration between users, both inside and outside your company. | + | Microsoft 365 | Microsoft 365 Group that isn't Teams-enabled. Used for collaboration between users, both inside and outside your company. | | Team | Microsoft 365 Group that is Teams-enabled. Used for collaboration between users, both inside and outside your company. | 1. In the **Role** list, select **Owner** or **Member**. For more information, see [Compare groups](/office365/admin/create-groups/compar  -1. Click **Add**. +1. Select **Add**. - Any users with existing assignments to the access package will automatically become members of this group or team when it is added. + Any users with existing assignments to the access package will automatically become members of this group or team when it's added. ## Add an application resource role -You can have Azure AD automatically assign users access to an Azure AD enterprise application, including both SaaS applications and your organization's applications integrated with Azure AD, when a user is assigned an access package. For applications that integrate with Azure AD through federated single sign-on, Azure AD will issue federation tokens for users assigned to the application. +You can have Azure AD automatically assign users access to an Azure AD enterprise application, including both SaaS applications and your organization's applications integrated with Azure AD, when a user is assigned an access package. For applications that integrate with Azure AD through federated single sign-on, Azure AD issues federation tokens for users assigned to the application. -Applications can have multiple app roles defined in their manifest. When you add an application to an access package, if that application has more than one app role, you'll need to specify the appropriate role for those users in each access package. If you're developing applications, you can read more about how those roles are added to your applications in [How to: Configure the role claim issued in the SAML token for enterprise applications](../develop/active-directory-enterprise-app-role-management.md). +Applications can have multiple app roles defined in their manifest. When you add an application to an access package, if that application has more than one app role, you need to specify the appropriate role for those users in each access package. If you're developing applications, you can read more about how those roles are added to your applications in [How to: Configure the role claim issued in the SAML token for enterprise applications](../develop/active-directory-enterprise-app-role-management.md). > [!NOTE] > If an application has multiple roles, and more than one role of that application are in an access package, then the user will receive all those application's roles. If instead you want users to only have some of the application's roles, then you will need to create multiple access packages in the catalog, with separate access packages for each of the application roles. Applications can have multiple app roles defined in their manifest. 
Once an application role is part of an access package: - When a user is assigned that access package, the user is added to that application role, if not already present.-- When a user's access package assignment expires, their access will be removed from the application, unless they have an assignment to another access package that includes that application role.+- When a user's access package assignment expires, their access is removed from the application, unless they have an assignment to another access package that includes that application role. Here are some considerations when selecting an application: -- Applications may also have groups assigned to their app roles as well. You can choose to add a group in place of an application role in an access package, however then the application will not be visible to the user as part of the access package in the My Access portal.-- Azure portal may also show service principals for services that cannot be selected as applications. In particular, **Exchange Online** and **SharePoint Online** are services, not applications that have resource roles in the directory, so they cannot be included in an access package. Instead, use group-based licensing to establish an appropriate license for a user who needs access to those services.-- Applications which only support Personal Microsoft Account users for authentication, and do not support organizational accounts in your directory, do not have application roles and cannot be added to access package catalogs.+- Applications may also have groups assigned to their app roles. You can choose to add a group in place of an application role in an access package; however, the application won't be visible to the user as part of the access package in the My Access portal. +- The Azure portal may also show service principals for services that can't be selected as applications. In particular, **Exchange Online** and **SharePoint Online** are services, not applications that have resource roles in the directory, so they can't be included in an access package. Instead, use group-based licensing to establish an appropriate license for a user who needs access to those services. +- Applications that only support Personal Microsoft Account users for authentication, and don't support organizational accounts in your directory, don't have application roles and can't be added to access package catalogs. -1. On the **Add resource roles to access package** page, click **Applications** to open the Select applications pane. +1. On the **Add resource roles to access package** page, select **Applications** to open the Select applications pane. 1. Select the applications you want to include in the access package.  -1. Click **Select**. +1. Select **Select**. 1. In the **Role** list, select an application role.  -1. Click **Add**. +1. Select **Add**. - Any users with existing assignments to the access package will automatically be given access to this application when it is added. + Any users with existing assignments to the access package will automatically be given access to this application when it's added. ## Add a SharePoint site resource role -Azure AD can automatically assign users access to a SharePoint Online site or SharePoint Online site collection when they are assigned an access package. +Azure AD can automatically assign users access to a SharePoint Online site or SharePoint Online site collection when they're assigned an access package. -1. 
On the **Add resource roles to access package** page, click **SharePoint sites** to open the Select SharePoint Online sites pane. +1. On the **Add resource roles to access package** page, select **SharePoint sites** to open the Select SharePoint Online sites pane. :::image type="content" source="media/entitlement-management-access-package-resources/resource-sharepoint-add.png" alt-text="Access package - Add resource roles - Select SharePoint sites - Portal view"::: Azure AD can automatically assign users access to a SharePoint Online site or Sh  -1. Click **Select**. +1. Select **Select**. 1. In the **Role** list, select a SharePoint Online site role.  -1. Click **Add**. +1. Select **Add**. - Any users with existing assignments to the access package will automatically be given access to this SharePoint Online site when it is added. + Any users with existing assignments to the access package will automatically be given access to this SharePoint Online site when it's added. ## Add resource roles programmatically There are two ways to add a resource role to an access package programmatically, You can add a resource role to an access package using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.ReadWrite.All` permission can call the API to: -1. [List the accessPackageResources in the catalog](/graph/api/entitlementmanagement-list-accesspackagecatalogs?tabs=http&view=graph-rest-beta&preserve-view=true) and [create an accessPackageResourceRequest](/graph/api/entitlementmanagement-post-accesspackageresourcerequests?tabs=http&view=graph-rest-beta&preserve-view=true) for any resources that are not yet in the catalog. +1. [List the accessPackageResources in the catalog](/graph/api/entitlementmanagement-list-accesspackagecatalogs?tabs=http&view=graph-rest-beta&preserve-view=true) and [create an accessPackageResourceRequest](/graph/api/entitlementmanagement-post-accesspackageresourcerequests?tabs=http&view=graph-rest-beta&preserve-view=true) for any resources that aren't yet in the catalog. 1. [List the accessPackageResourceRoles](/graph/api/accesspackage-list-accesspackageresourcerolescopes?tabs=http&view=graph-rest-beta&preserve-view=true) of each accessPackageResource in an accessPackageCatalog. This list of roles will then be used to select a role, when subsequently creating an accessPackageResourceRoleScope. 1. [Create an accessPackageResourceRoleScope](/graph/api/accesspackage-post-accesspackageresourcerolescopes?tabs=http&view=graph-rest-beta&preserve-view=true) for each resource role needed in the access package. New-MgEntitlementManagementAccessPackageResourceRoleScope -AccessPackageId $apid **Prerequisite role:** Global administrator, User administrator, Catalog owner, or Access package manager -1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**. +1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**. -1. In the left menu, click **Access packages** and then open the access package. +1. In the left menu, select **Access packages** and then open the access package. -1. In the left menu, click **Resource roles**. +1. In the left menu, select **Resource roles**. 1. In the list of resource roles, find the resource role you want to remove. -1. Click the ellipsis (**...**) and then click **Remove resource role**. +1. Select the ellipsis (**...**) and then select **Remove resource role**. 
- Any users with existing assignments to the access package will automatically have their access revoked to this resource role when it is removed. + Any users with existing assignments to the access package will automatically have their access revoked to this resource role when it's removed. ## When changes are applied -In entitlement management, Azure AD will process bulk changes for assignment and resources in your access packages several times a day. So, if you make an assignment, or change the resource roles of your access package, it can take up to 24 hours for that change to be made in Azure AD, plus the amount of time it takes to propagate those changes to other Microsoft Online Services or connected SaaS applications. If your change affects just a few objects, the change will likely only take a few minutes to apply in Azure AD, after which other Azure AD components will then detect that change and update the SaaS applications. If your change affects thousands of objects, the change will take longer. For example, if you have an access package with 2 applications and 100 user assignments, and you decide to add a SharePoint site role to the access package, there may be a delay until all the users are part of that SharePoint site role. You can monitor the progress through the Azure AD audit log, the Azure AD provisioning log, and the SharePoint site audit logs. +In entitlement management, Azure AD processes bulk changes for assignment and resources in your access packages several times a day. So, if you make an assignment, or change the resource roles of your access package, it can take up to 24 hours for that change to be made in Azure AD, plus the amount of time it takes to propagate those changes to other Microsoft Online Services or connected SaaS applications. If your change affects just a few objects, the change will likely only take a few minutes to apply in Azure AD, after which other Azure AD components will then detect that change and update the SaaS applications. If your change affects thousands of objects, the change takes longer. For example, if you have an access package with 2 applications and 100 user assignments, and you decide to add a SharePoint site role to the access package, there may be a delay until all the users are part of that SharePoint site role. You can monitor the progress through the Azure AD audit log, the Azure AD provisioning log, and the SharePoint site audit logs. -When you remove a member of a team, they are removed from the Microsoft 365 Group as well. Removal from the team's chat functionality might be delayed. For more information, see [Group membership](/microsoftteams/office-365-groups#group-membership). +When you remove a member of a team, they're removed from the Microsoft 365 Group as well. Removal from the team's chat functionality might be delayed. For more information, see [Group membership](/microsoftteams/office-365-groups#group-membership). -When a resource role is added to an access package by an admin, users who are in that resource role, but do not have assignments to the access package, will remain in the resource role, but won't be assigned to the access package. For example, if a user is a member of a group and then an access package is created and that group's member role is added to an access package, the user won't automatically receive an assignment to the access package. 
+When a resource role is added to an access package by an admin, users who are in that resource role, but don't have assignments to the access package, will remain in the resource role, but won't be assigned to the access package. For example, if a user is a member of a group and then an access package is created and that group's member role is added to an access package, the user won't automatically receive an assignment to the access package. -If you want the users to also be assigned to the access package, you can [directly assign users](entitlement-management-access-package-assignments.md#directly-assign-a-user) to an access package using the Azure portal, or in bulk via Graph or PowerShell. The users will then also receive access to the other resource roles in the access package. However, as those users already have access prior to being added to the access package, when their access package assignment is removed, they will remain in the resource role. For example, if a user was a member of a group, and was assigned to an access package that included group membership for that group as a resource role, and then that user's access package assignment was removed, the user would retain their group membership. +If you want the users to also be assigned to the access package, you can [directly assign users](entitlement-management-access-package-assignments.md#directly-assign-a-user) to an access package using the Azure portal, or in bulk via Graph or PowerShell. The users will then also receive access to the other resource roles in the access package. However, as those users already have access prior to being added to the access package, when their access package assignment is removed, they remain in the resource role. For example, if a user was a member of a group, and was assigned to an access package that included group membership for that group as a resource role, and then that user's access package assignment was removed, the user would retain their group membership. ## Next steps |
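The Graph sequence described in the row above can be scripted end to end. The following is a minimal sketch using `Invoke-MgGraphRequest` from the Microsoft Graph PowerShell SDK against the beta endpoints linked there; the catalog ID and access package ID are hypothetical placeholders, the sketch assumes the resource is an Azure AD group, and the exact request body fields should be checked against the accessPackageResourceRoleScope reference linked above.

```powershell
# Minimal sketch only; IDs are hypothetical placeholders.
Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All"

$catalogId       = "<catalog-id>"
$accessPackageId = "<access-package-id>"
$base = "https://graph.microsoft.com/beta/identityGovernance/entitlementManagement"

# 1. List the resources already in the catalog and pick one (here, simply the first).
$resources = Invoke-MgGraphRequest -Method GET -Uri "$base/accessPackageCatalogs/$catalogId/accessPackageResources"
$resource  = $resources.value[0]

# 2. List that resource's roles, assuming an Azure AD group resource.
$roles = Invoke-MgGraphRequest -Method GET -Uri ("$base/accessPackageCatalogs/$catalogId/accessPackageResourceRoles" +
    "?`$filter=originSystem eq 'AadGroup' and accessPackageResource/id eq '$($resource.id)'")

# 3. Add one of those roles to the access package as a resource role scope.
$body = @{
    accessPackageResourceRole = @{
        originId              = $roles.value[0].originId
        originSystem          = "AadGroup"
        accessPackageResource = @{ id = $resource.id }
    }
    accessPackageResourceScope = @{
        originId     = $resource.originId
        originSystem = "AadGroup"
    }
}
Invoke-MgGraphRequest -Method POST -Uri "$base/accessPackages/$accessPackageId/accessPackageResourceRoleScopes" -Body $body
```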
active-directory | Entitlement Management Access Package Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-settings.md | -Most users in your directory can sign in to the My Access portal and automatically see a list of access packages they can request. However, for external business partner users that are not yet in your directory, you will need to send them a link that they can use to request an access package. +Most users in your directory can sign in to the My Access portal and automatically see a list of access packages they can request. However, for external business partner users that aren't yet in your directory, you'll need to send them a link that they can use to request an access package. As long as the catalog for the access package is [enabled for external users](entitlement-management-catalog-create.md) and you have a [policy for the external user's directory](entitlement-management-access-package-request-policy.md), the external user can use the My Access portal link to request the access package. **Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, or Access package manager -1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**. +1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**. -1. In the left menu, click **Access packages** and then open the access package. +1. In the left menu, select **Access packages** and then open the access package. 1. On the Overview page, copy the **My Access portal link**.  - It is important that you copy the entire My Access portal link when sending it to an internal business partner. This ensures that the partner will get access to your directory's portal to make their request. The link starts with `myaccess`, includes a directory hint, and ends with an access package ID. (For US Government, the domain in the My Access portal link will be `myaccess.microsoft.us`.) + It's important that you copy the entire My Access portal link when sending it to an external business partner. This ensures that the partner gets access to your directory's portal to make their request. The link starts with `myaccess`, includes a directory hint, and ends with an access package ID. (For US Government, the domain in the My Access portal link will be `myaccess.microsoft.us`.) `https://myaccess.microsoft.com/@<directory_hint>#/access-packages/<access_package_id>` |
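If you distribute many of these links, the template in the row above can be filled in from a script. This is a minimal sketch with hypothetical values; copying the link from the access package's Overview page remains the authoritative source.

```powershell
# Hypothetical values; the real link should be copied from the Overview page.
$directoryHint   = "contoso.onmicrosoft.com"
$accessPackageId = "00000000-0000-0000-0000-000000000000"
"https://myaccess.microsoft.com/@$directoryHint#/access-packages/$accessPackageId"
```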
active-directory | Entitlement Management Access Reviews Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-reviews-create.md | |
active-directory | Entitlement Management Access Reviews Self Review | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-reviews-self-review.md | Entitlement management simplifies how enterprises manage access to groups, appli To do an access review, you must first open the access review. Use the following procedure to find and open the access review: -1. You may receive an email from Microsoft that asks you to review access. Locate the email to open the access review. Here is an example of an email requesting a review of access: +1. You may receive an email from Microsoft that asks you to review access. Locate the email to open the access review. Here's an example of an email requesting a review of access:  -1. Click the **Review access** link. +1. Select the **Review access** link. 1. You can also go directly to https://myaccess.microsoft.com to find your pending access reviews if you don't receive an email. (For US Government, use `https://myaccess.microsoft.us` instead.) -1. Click **Access reviews** on the left navigation bar to see a list of pending access reviews assigned to you. +1. Select **Access reviews** on the left navigation bar to see a list of pending access reviews assigned to you. -1. Click the review that you'd like to begin. +1. Select the review that you'd like to begin. ## Perform the access review Once you open the access review, you can see your access. Use the following proc 1. Decide whether you still need access to the access package. For example, the project you're working on isn't complete, so you still need access to continue working on the project. -1. Click **Yes** to keep your access or click **No** to remove your access. +1. Select **Yes** to keep your access or select **No** to remove your access. >[!NOTE] >If you stated that you no longer need access, you aren't removed from the access package immediately. You will be removed from the access package when the review ends or if an administrator stops the review. -1. If you clicked **Yes**, you may need to include a justification statement in the **Reason** box. +1. If you chose **Yes**, you may need to include a justification statement in the **Reason** box. -1. Click **Submit**. +1. Select **Submit**. You can return to the review if you change your mind and decide to change your response before the end of the review. |
active-directory | Entitlement Management Catalog Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-catalog-create.md | This article shows you how to create and manage a catalog of resources and acces ## Create a catalog -A catalog is a container of resources and access packages. You create a catalog when you want to group related resources and access packages. An administrator can create a catalog. In addition, a user who has been delegated the [catalog creator](entitlement-management-delegate.md) role can create a catalog for resources that they own. A non-administrator who creates the catalog becomes the first catalog owner. A catalog owner can add more users, groups of users, or application service principals as catalog owners. +A catalog is a container of resources and access packages. You create a catalog when you want to group related resources and access packages. An administrator can create a catalog. In addition, a user who has been delegated the [catalog creator](entitlement-management-delegate.md) role can create a catalog for resources that they own. A nonadministrator who creates the catalog becomes the first catalog owner. A catalog owner can add more users, groups of users, or application service principals as catalog owners. **Prerequisite roles:** Global administrator, Identity Governance administrator, User administrator, or Catalog creator To create a catalog: 1. Enter a unique name for the catalog and provide a description. - Users will see this information in an access package's details. + Users see this information in an access package's details. 1. If you want the access packages in this catalog to be available for users to request as soon as they're created, set **Enabled** to **Yes**. To assign a user to the catalog owner role: 1. Select **Add owners** to select the members for these roles. -1. Click **Select** to add these members. +1. Select **Select** to add these members. ## Edit a catalog |
active-directory | Entitlement Management Delegate Catalog | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-delegate-catalog.md | Follow these steps to assign a user to the catalog creator role. **Prerequisite role:** Global administrator, Identity Governance administrator or User administrator -1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**. +1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**. -1. In the left menu, in the **Entitlement management** section, click **Settings**. +1. In the left menu, in the **Entitlement management** section, select **Settings**. -1. Click **Edit**. +1. Select **Edit**.  -1. In the **Delegate entitlement management** section, click **Add catalog creators** to select the users or groups that you want to delegate this entitlement management role to. +1. In the **Delegate entitlement management** section, select **Add catalog creators** to select the users or groups that you want to delegate this entitlement management role to. -1. Click **Select**. +1. Select **Select**. -1. Click **Save**. +1. Select **Save**. ## Allow delegated roles to access the Azure portal To allow delegated roles, such as catalog creators and access package managers, **Prerequisite role:** Global administrator or User administrator -1. In the Azure portal, click **Azure Active Directory** and then click **Users**. +1. In the Azure portal, select **Azure Active Directory** and then select **Users**. -1. In the left menu, click **User settings**. +1. In the left menu, select **User settings**. 1. Make sure **Restrict access to Azure AD administration portal** is set to **No**. To allow delegated roles, such as catalog creators and access package managers, You can also view and update catalog creators and entitlement management catalog-specific role assignments using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.ReadWrite.All` permission can call the Graph API to [list the role definitions](/graph/api/rbacapplication-list-roledefinitions) of entitlement management, and [list role assignments](/graph/api/rbacapplication-list-roleassignments) to those role definitions. -To retrieve a list of the users and groups assigned to the catalog creators role, the role with definition id `ba92d953-d8e0-4e39-a797-0cbedb0a89e8`, use the Graph query +To retrieve a list of the users and groups assigned to the catalog creators role, the role with definition ID `ba92d953-d8e0-4e39-a797-0cbedb0a89e8`, use the Graph query ```http GET https://graph.microsoft.com/beta/roleManagement/entitlementManagement/roleAssignments?$filter=roleDefinitionId eq 'ba92d953-d8e0-4e39-a797-0cbedb0a89e8'&$expand=principal |
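```

The same query can be issued from PowerShell. A minimal sketch using `Invoke-MgGraphRequest` from the Microsoft Graph PowerShell SDK, assuming the delegated `EntitlementManagement.ReadWrite.All` permission mentioned in the row above:

```powershell
# Lists the principals assigned to the catalog creator role definition.
Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All"
$roleDefinitionId = "ba92d953-d8e0-4e39-a797-0cbedb0a89e8"   # catalog creator, from the query above
$result = Invoke-MgGraphRequest -Method GET -Uri ("https://graph.microsoft.com/beta/roleManagement/entitlementManagement/roleAssignments" +
    "?`$filter=roleDefinitionId eq '$roleDefinitionId'&`$expand=principal")
$result.value | ForEach-Object { $_.principal.displayName }
```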
active-directory | Entitlement Management Delegate Managers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-delegate-managers.md | -To delegate the creation and management of access packages in a catalog, you add users to the access package manager role. Access package managers must be familiar with the need for users to request access to resources in a catalog. For example, if a catalog is used for a project, then a project lead might be an access package manager for that catalog. Access package managers cannot add resources to a catalog, but they can manage the access packages and policies in a catalog. When delegating to an access package manager, that person can then be responsible for: +To delegate the creation and management of access packages in a catalog, you add users to the access package manager role. Access package managers must be familiar with the need for users to request access to resources in a catalog. For example, if a catalog is used for a project, then a project lead might be an access package manager for that catalog. Access package managers can't add resources to a catalog, but they can manage the access packages and policies in a catalog. When delegating to an access package manager, that person can then be responsible for: -- What roles a user will have to the resources in a catalog+- What roles a user has to the resources in a catalog - Who will need access - Who needs to approve the access requests-- How long the project will last+- How long the project lasts This video provides an overview of how to delegate access governance from catalog owner to access package manager. Follow these steps to assign a user to the access package manager role: **Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, or Catalog owner -1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**. +1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**. -1. In the left menu, click **Catalogs** and then open the catalog you want to add administrators to. +1. In the left menu, select **Catalogs** and then open the catalog you want to add administrators to. -1. In the left menu, click **Roles and administrators**. +1. In the left menu, select **Roles and administrators**.  -1. Click **Add access package managers** to select the members for these roles. +1. Select **Add access package managers** to select the members for these roles. -1. Click **Select** to add these members. +1. Select **Select** to add these members. ## Remove an access package manager Follow these steps to remove a user from the access package manager role: **Prerequisite role:** Global administrator, User administrator, or Catalog owner -1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**. +1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**. -1. In the left menu, click **Catalogs** and then open the catalog you want to add administrators to. +1. In the left menu, select **Catalogs** and then open the catalog you want to add administrators to. -1. In the left menu, click **Roles and administrators**. +1. In the left menu, select **Roles and administrators**. 1. Add a checkmark next to an access package manager you want to remove. -1. Click **Remove**. +1. Select **Remove**. ## Next steps |
active-directory | Entitlement Management Delegate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-delegate.md | To understand how you might delegate access governance in entitlement management  -As the IT administrator, Hana has contacts in each department-- Mamta in Marketing, Mark in Finance, and Joe in Legal who are responsible for their department's resources and business critical content. +As the IT administrator, Hana has contacts in each department--Mamta in Marketing, Mark in Finance, and Joe in Legal, who are responsible for their department's resources and business critical content. With entitlement management, you can delegate access governance to these non-administrators because they're the ones who know which users need access, for how long, and to which resources. Delegating to non-administrators ensures the right people are managing access for their departments. -Here is one way that Hana could delegate access governance to the marketing, finance, and legal departments. +Here's one way that Hana could delegate access governance to the marketing, finance, and legal departments. 1. Hana creates a new Azure AD security group, and adds Mamta, Mark, and Joe as members of the group. After delegation, the marketing department might have roles similar to the follo ## Entitlement management roles -Entitlement management has the following roles, with permissions for administering entitlement management itself, that apply across all catalogs. +Entitlement management has the following roles, with permissions for administering entitlement management itself, which apply across all catalogs. | Entitlement management role | Role definition ID | Description | | | | -- | For example, to view the entitlement management-specific roles that a particular ```http GET https://graph.microsoft.com/beta/roleManagement/entitlementManagement/roleAssignments?$filter=principalId eq '10850a21-5283-41a6-9df3-3d90051dd111'&$expand=roleDefinition&$select=id,appScopeId,roleDefinition ``` -For a role that is specific to a catalog, the `appScopeId` in the response indicates the catalog in which the user is assigned a role. Note that this response only retrieves explicit assignments of that principal to role in entitlement management, it does not return results for a user who has access rights via a directory role, or through membership in a group assigned to a role. +For a role that is specific to a catalog, the `appScopeId` in the response indicates the catalog in which the user is assigned a role. This response only retrieves explicit assignments of that principal to a role in entitlement management; it doesn't return results for a user who has access rights via a directory role, or through membership in a group assigned to a role. ## Next steps |
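A PowerShell version of the per-user query in the row above; the principal ID is the example object ID from that query, and catalog-scoped assignments surface the catalog through `appScopeId` as described there.

```powershell
# Lists a principal's explicit entitlement management role assignments.
$principalId = "10850a21-5283-41a6-9df3-3d90051dd111"   # example ID from the query above
$assignments = Invoke-MgGraphRequest -Method GET -Uri ("https://graph.microsoft.com/beta/roleManagement/entitlementManagement/roleAssignments" +
    "?`$filter=principalId eq '$principalId'&`$expand=roleDefinition&`$select=id,appScopeId,roleDefinition")
$assignments.value | ForEach-Object {
    [pscustomobject]@{ Role = $_.roleDefinition.displayName; Scope = $_.appScopeId }
}
```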
active-directory | Entitlement Management Logic Apps Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-logic-apps-integration.md | |
active-directory | Entitlement Management Logs And Reporting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-logs-and-reporting.md | Archiving Azure AD audit logs requires you to have Azure Monitor in an Azure sub 1. Sign in to the Azure portal as a user who is a Global Administrator. Make sure you have access to the resource group containing the Azure Monitor workspace. -1. Select **Azure Active Directory** then click **Diagnostic settings** under Monitoring in the left navigation menu. Check if there's already a setting to send the audit logs to that workspace. +1. Select **Azure Active Directory** then select **Diagnostic settings** under Monitoring in the left navigation menu. Check if there's already a setting to send the audit logs to that workspace. -1. If there isn't already a setting, click **Add diagnostic setting**. Use the instructions in [Integrate Azure AD logs with Azure Monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md#send-logs-to-azure-monitor) to send the Azure AD audit log to the Azure Monitor workspace. +1. If there isn't already a setting, select **Add diagnostic setting**. Use the instructions in [Integrate Azure AD logs with Azure Monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md#send-logs-to-azure-monitor) to send the Azure AD audit log to the Azure Monitor workspace.  1. After the log is sent to Azure Monitor, select **Log Analytics workspaces**, and select the workspace that contains the Azure AD audit logs. -1. Select **Usage and estimated costs** and click **Data Retention**. Change the slider to the number of days you want to keep the data to meet your auditing requirements. +1. Select **Usage and estimated costs** and select **Data Retention**. Change the slider to the number of days you want to keep the data to meet your auditing requirements.  1. Later, to see the range of dates held in your workspace, you can use the *Archived Log Date Range* workbook: - 1. Select **Azure Active Directory** then click **Workbooks**. + 1. Select **Azure Active Directory** then select **Workbooks**. - 1. Expand the section **Azure Active Directory Troubleshooting**, and click on **Archived Log Date Range**. + 1. Expand the section **Azure Active Directory Troubleshooting**, and select **Archived Log Date Range**. ## View events for an access package To view events for an access package, you must have access to the underlying Azu Use the following procedure to view events: -1. In the Azure portal, select **Azure Active Directory** then click **Workbooks**. If you only have one subscription, move on to step 3. +1. In the Azure portal, select **Azure Active Directory** then select **Workbooks**. If you only have one subscription, move on to step 3. 1. If you have multiple subscriptions, select the subscription that contains the workspace. Use the following procedure to view events: Each row includes the time, access package ID, the name of the operation, the object ID, UPN, and the display name of the user who started the operation. Additional details are included in JSON. -1. If you would like to see if there have been changes to application role assignments for an application that were not due to access package assignments, such as by a global administrator directly assigning a user to an application role, then you can select the workbook named *Application role assignment activity*. +1. 
If you would like to see if there have been changes to application role assignments for an application that weren't due to access package assignments, such as by a global administrator directly assigning a user to an application role, then you can select the workbook named *Application role assignment activity*.  ## Create custom Azure Monitor queries using the Azure portal You can create your own queries on Azure AD audit events, including entitlement management events. -1. In Azure Active Directory of the Azure portal, click **Logs** under the Monitoring section in the left navigation menu to create a new query page. +1. In Azure Active Directory of the Azure portal, select **Logs** under the Monitoring section in the left navigation menu to create a new query page. -1. Your workspace should be shown in the upper left of the query page. If you have multiple Azure Monitor workspaces, and the workspace you're using to store Azure AD audit events isn't shown, click **Select Scope**. Then, select the correct subscription and workspace. +1. Your workspace should be shown in the upper left of the query page. If you have multiple Azure Monitor workspaces, and the workspace you're using to store Azure AD audit events isn't shown, select **Select Scope**. Then, select the correct subscription and workspace. 1. Next, in the query text area, delete the string "search *" and replace it with the following query: You can create your own queries on Azure AD audit events, including entitlement AuditLogs | where Category == "EntitlementManagement" ``` -1. Then click **Run**. +1. Then select **Run**.  -The table will show the Audit log events for entitlement management from the last hour by default. You can change the "Time range" setting to view older events. However, changing this setting will only show events that occurred after Azure AD was configured to send events to Azure Monitor. +The table shows the Audit log events for entitlement management from the last hour by default. You can change the "Time range" setting to view older events. However, changing this setting will only show events that occurred after Azure AD was configured to send events to Azure Monitor. If you would like to know the oldest and newest audit events held in Azure Monitor, use the following query: To set the role assignment and create a query, do the following steps: 1. Select **Access Control (IAM)**. -1. Then click **Add** to add a role assignment. +1. Then select **Add** to add a role assignment.  $wks = Get-AzOperationalInsightsWorkspace ### Retrieve Log Analytics ID with multiple Azure subscriptions - [Get-AzOperationalInsightsWorkspace](/powershell/module/Az.OperationalInsights/Get-AzOperationalInsightsWorkspace) operates in one subscription at a time. So, if you have multiple Azure subscriptions, you'll want to make sure you connect to the one that has the Log Analytics workspace with the Azure AD logs. + [Get-AzOperationalInsightsWorkspace](/powershell/module/Az.OperationalInsights/Get-AzOperationalInsightsWorkspace) operates in one subscription at a time. So, if you have multiple Azure subscriptions, you want to make sure you connect to the one that has the Log Analytics workspace with the Azure AD logs. The following cmdlets display a list of subscriptions, and find the ID of the subscription that has the Log Analytics workspace: |
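The workspace lookup and audit query described in the row above can be combined in one script. A minimal sketch, assuming the Az PowerShell modules and hypothetical resource group and workspace names:

```powershell
# Runs the entitlement management audit query against a Log Analytics workspace.
Connect-AzAccount
$wks = Get-AzOperationalInsightsWorkspace -ResourceGroupName "myResourceGroup" -Name "myWorkspace"   # hypothetical names
$query = 'AuditLogs | where Category == "EntitlementManagement" | take 50'
$results = Invoke-AzOperationalInsightsQuery -WorkspaceId $wks.CustomerId -Query $query
$results.Results | Format-Table TimeGenerated, OperationName
```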
active-directory | Entitlement Management Onboard External User | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-onboard-external-user.md | |
active-directory | Entitlement Management Organization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-organization.md | |
active-directory | Entitlement Management Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-overview.md | |
active-directory | Entitlement Management Process | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-process.md | A user that needs access to an access package can submit an access request. Depe | | | | Submitted | User submits a request. | | Pending approval | If the policy for an access package requires approval, a request moves to pending approval. |-| Expired | If no approvers approve a request within the approval request timeout, the request expires. To try again, the user will have to resubmit their request. | +| Expired | If no approvers approve a request within the approval request timeout, the request expires. To try again, the user has to resubmit their request. | | Denied | Approver denies a request. | | Approved | Approver approves a request. | | Delivering | User has **not** been assigned access to all the resources in the access package. If this is an external user, the user may not have accessed the resource directory yet. They also may not have accepted the consent prompt. | | Delivered | User has been assigned access to all the resources in the access package. | | Access extended | If extensions are allowed in the policy, the user extended the assignment. |-| Access expired | User's access to the access package has expired. To get access again, the user will have to submit a request. | +| Access expired | User's access to the access package has expired. To get access again, the user has to submit a request. | ## Email notifications The following diagram shows the experience of stage-1 and stage-2 approvers and :::image type="content" source="./media/entitlement-management-process/2stage-approval-with-request-timeout-flow.png" alt-text="2-stage approval process flow" lightbox="./media/entitlement-management-process/2stage-approval-with-request-timeout-flow.png"::: ### Email notifications table-The following table provides more detail about each of these email notifications. To manage these emails, you can use rules. For example, in Outlook, you can create rules to move the emails to a folder if the subject contains words from this table. Note that the words will be based on the default language settings of the tenant where the user is requesting access. +The following table provides more detail about each of these email notifications. To manage these emails, you can use rules. For example, in Outlook, you can create rules to move the emails to a folder if the subject contains words from this table. The words are based on the default language settings of the tenant where the user is requesting access. | # | Email subject | When sent | Sent to | | | | | | | 1 | Action required: Approve or deny forwarded request by *[date]* | This email will be sent to Stage-1 alternate approvers (after the request has been escalated) to take action. | Stage-1 alternate approvers |-| 2 | Action required: Approve or deny request by *[date]* | This email will be sent to the first approver, if escalation is disabled, to take action. | First approver | -| 3 | Reminder: Approve or deny the request by *[date]* for *[requestor]* | This reminder email will be sent to the first approver, if escalation is disabled. The email asks them to take action if they haven't. | First approver | -| 4 | Approve or deny the request by *[time]* on *[date]* | This email will be sent to the first approver (if escalation is enabled) to take action. 
| First approver | -| 5 | Action required reminder: Approve or deny the request by *[date]* for *[requestor]* | This reminder email will be sent to the first approver, if escalation is enabled. The email asks them to take action if they haven't. | First approver | +| 2 | Action required: Approve or deny request by *[date]* | This email is sent to the first approver, if escalation is disabled, to take action. | First approver | +| 3 | Reminder: Approve or deny the request by *[date]* for *[requestor]* | This reminder email is sent to the first approver, if escalation is disabled. The email asks them to take action if they haven't. | First approver | +| 4 | Approve or deny the request by *[time]* on *[date]* | This email is sent to the first approver (if escalation is enabled) to take action. | First approver | +| 5 | Action required reminder: Approve or deny the request by *[date]* for *[requestor]* | This reminder email is sent to the first approver, if escalation is enabled. The email asks them to take action if they haven't. | First approver | | 6 | Request has expired for *[access_package]* | This email will be sent to the first approver and stage-1 alternate approvers after the request has expired. | First approver, stage-1 alternate approvers |-| 7 | Request approved for *[requestor]* to *[access_package]* | This email will be sent to the first approver and stage-1 alternate approvers upon request completion. | First approver, stage-1 alternate approvers | -| 8 | Request approved for *[requestor]* to *[access_package]* | This email will be sent to the first approver and stage-1 alternate approvers of a multi-stage request when the stage-1 request is approved. | First approver, stage-1 alternate approvers | -| 9 | Request denied to *[access_package]* | This email will be sent to the requestor when their request is denied | Requestor | -| 10 | Your request has expired for *[access_package]* | This email will be sent to the requestor at the end of a single or multi-stage request. The email notifies the requestor that the request expired. | Requestor | -| 11 | Action required: Approve or deny request by *[date]* | This email will be sent to the second approver, if escalation is disabled, to take action. | Second approver | -| 12 | Action required reminder: Approve or deny the request by *[date]* | This reminder email will be sent to the second approver, if escalation is disabled. The notification asks them to take action if they haven't yet. | Second approver | -| 13 | Action required: Approve or deny the request by *[date]* for *[requestor]* | This email will be sent to second approver, if escalation is enabled, to take action. | Second approver | -| 14 | Action required reminder: Approve or deny the request by *[date]* for *[requestor]* | This reminder email will be sent to the second approver, if escalation is enabled. The notification asks them to take action if they haven't yet. | Second approver | -| 15 | Action required: Approve or deny forwarded request by *[date]* | This email will be sent to stage-2 alternate approvers, if escalation is enabled, to take action. | Stage-2 alternate approvers | -| 16 | Request approved for *[requestor]* to *[access_package]* | This email will be sent to the second approver and stage-2 alternate approvers upon approving the request. | Second approver, Stage-2 alternate approvers | +| 7 | Request approved for *[requestor]* to *[access_package]* | This email is sent to the first approver and stage-1 alternate approvers upon request completion. 
| First approver, stage-1 alternate approvers | +| 8 | Request approved for *[requestor]* to *[access_package]* | This email is sent to the first approver and stage-1 alternate approvers of a multi-stage request when the stage-1 request is approved. | First approver, stage-1 alternate approvers | +| 9 | Request denied to *[access_package]* | This email is sent to the requestor when their request is denied. | Requestor | +| 10 | Your request has expired for *[access_package]* | This email is sent to the requestor at the end of a single or multi-stage request. The email notifies the requestor that the request expired. | Requestor | +| 11 | Action required: Approve or deny request by *[date]* | This email is sent to the second approver, if escalation is disabled, to take action. | Second approver | +| 12 | Action required reminder: Approve or deny the request by *[date]* | This reminder email is sent to the second approver, if escalation is disabled. The notification asks them to take action if they haven't yet. | Second approver | +| 13 | Action required: Approve or deny the request by *[date]* for *[requestor]* | This email is sent to the second approver, if escalation is enabled, to take action. | Second approver | +| 14 | Action required reminder: Approve or deny the request by *[date]* for *[requestor]* | This reminder email is sent to the second approver, if escalation is enabled. The notification asks them to take action if they haven't yet. | Second approver | +| 15 | Action required: Approve or deny forwarded request by *[date]* | This email is sent to stage-2 alternate approvers, if escalation is enabled, to take action. | Stage-2 alternate approvers | +| 16 | Request approved for *[requestor]* to *[access_package]* | This email is sent to the second approver and stage-2 alternate approvers upon approving the request. | Second approver, Stage-2 alternate approvers | | 17 | A request has expired for *[access_package]* | This email will be sent to the second approver or alternate approvers, after the request expires. | Second approver, stage-2 alternate approvers |-| 18 | You now have access to *[access_package]* | This email will be sent to the end users to start using their access. | Requestor | -| 19 | Extend access for *[access_package]* by *[date]* | This email will be sent to the end users before their access expires. | Requestor | +| 18 | You now have access to *[access_package]* | This email is sent to the end users to start using their access. | Requestor | +| 19 | Extend access for *[access_package]* by *[date]* | This email is sent to the end users before their access expires. | Requestor | | 20 | Access has ended for *[access_package]* | This email will be sent to the end users after their access expires. | Requestor | ### Access request emails -When a requestor submits an access request for an access package configured to require approval, all approvers added to the policy will receive an email notification with details of the request. The details in the email include: requestor's name organization, and business justification; and the requested access start and end date (if provided). The details will also include when the request was submitted and when the request will expire. +When a requestor submits an access request for an access package configured to require approval, all approvers added to the policy receive an email notification with details of the request. 
The details in the email include: requestor's name, organization, and business justification; and the requested access start and end date (if provided). The details will also include when the request was submitted and when the request will expire. -The email includes a link approvers can click on to go to My Access to approve or deny the access request. Here is a sample email notification that is sent to an approver to complete an access request: +The email includes a link approvers can select to go to My Access to approve or deny the access request. Here's a sample email notification that is sent to an approver to complete an access request:  -Approvers can also receive a reminder email. The email asks the approver to make a decision on the request. Here is a sample email notification the approver receives to remind them to take action: +Approvers can also receive a reminder email. The email asks the approver to make a decision on the request. Here's a sample email notification the approver receives to remind them to take action:  ### Alternate approvers request emails -If the alternate approvers setting is enabled and the request is still pending, it will be forwarded. Alternate approvers will receive an email to approve or deny the request. You can enable alternate approvers in stage-1 and stage-2. Here is a sample email of the notification the alternate approvers receive: +If the alternate approvers setting is enabled and the request is still pending, it's forwarded. Alternate approvers receive an email to approve or deny the request. You can enable alternate approvers in stage-1 and stage-2. Here's a sample email of the notification the alternate approvers receive:  Both the approver and the alternate approvers can approve or deny the request. ### Approved or denied emails - When an approver receives an access request submitted by a requestor, they can approve or deny the access request. The approver needs to add a business justification for their decision. Here is a sample email sent to the approvers + When an approver receives an access request submitted by a requestor, they can approve or deny the access request. The approver needs to add a business justification for their decision. Here's a sample email sent to the approvers and alternate approvers after a request is approved:  -When an access request is approved, and their access is provisioned, an email notification is sent to the requestor that they now have access to the access package. Here is a sample email notification that is sent to a requestor when they're granted access to an access package: +When an access request is approved and the requestor's access is provisioned, an email notification is sent to the requestor that they now have access to the access package. Here's a sample email notification that is sent to a requestor when they're granted access to an access package:  -When an access request is denied, an email notification is sent to the requestor. Here is a sample email notification that is sent to a requestor when their access request is denied: +When an access request is denied, an email notification is sent to the requestor. Here's a sample email notification that is sent to a requestor when their access request is denied:  ### Multi-stage approval access request emails -If multi-stage approval is enabled, at least one approvers from each stage must approve the request, before the requestor can receive access. 
+If multi-stage approval is enabled, at least one approver from each stage must approve the request before the requestor can receive access. -During stage-1, the first approver will receive the access request email and make a decision. +During stage-1, the first approver receives the access request email and makes a decision. -After the first or alternate approvers approve the request in stage-1, stage-2 begins. During stage-2, the second approver will receive the access request notification email. After the second approver or alternate approvers in stage-2 (if escalation is enabled) decide to approve or deny the request, notification emails are sent to the first and second approvers, and all alternate approvers in stage-1 and stage-2, as well as the requestor. +After the first or alternate approvers approve the request in stage-1, stage-2 begins. During stage-2, the second approver receives the access request notification email. After the second approver or alternate approvers in stage-2 (if escalation is enabled) decide to approve or deny the request, notification emails are sent to the first and second approvers, and all alternate approvers in stage-1 and stage-2, as well as the requestor. ### Expired access request emails An email notification is sent to the requestor, notifying them that their access :::image type="content" source="./media/entitlement-management-process/requestor-expiration-request-flow.png" alt-text="Requestor extend access process flow" lightbox="./media/entitlement-management-process/requestor-expiration-request-flow.png"::: -Here is a sample email notification that is sent to a requestor when their access request has expired: +Here's a sample email notification that is sent to a requestor when their access request has expired:  |
active-directory | Entitlement Management Reports | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-reports.md | -The entitlement management reports and Azure AD audit log provide additional details about what resources users have access to. As an administrator, you can view the access packages and resource assignments for a user and view request logs for auditing purposes or to determine the status of a user's request. This article describes how to use the entitlement management reports and Azure AD audit logs. +The entitlement management reports and Azure AD audit log provide additional details about what resources users have access to. As an administrator, you can view the access packages and resource assignments for a user and view request logs for auditing purposes or to determine the status of a user's request. This article describes how to use the entitlement management reports and Azure AD audit logs. Watch the following video to learn how to view what resources users have access to in entitlement management: This report enables you to list all of the access packages a user can request an **Prerequisite role:** Global administrator, Identity Governance administrator or User administrator -1. Click **Azure Active Directory** and then click **Identity Governance**. +1. Select **Azure Active Directory** and then select **Identity Governance**. -1. In the left menu, click **Reports**. +1. In the left menu, select **Reports**. -1. Click **Access packages for a user**. +1. Select **Access packages for a user**. -1. Click **Select users** to open the Select users pane. +1. Select **Select users** to open the Select users pane. -1. Find the user in the list and then click **Select**. +1. Find the user in the list and then select **Select**. The **Can request** tab displays a list of the access packages the user can request. This list is determined by the [request policies](entitlement-management-access-package-request-policy.md#for-users-in-your-directory) defined for the access packages.  -1. If there are more than one resource roles or policies for an access package, click the resource roles or policies entry to see selection details. +1. If there's more than one resource role or policy for an access package, select the resource role or policy entry to see selection details. -1. Click the **Assigned** tab to see a list of the access packages currently assigned to the user. +1. Select the **Assigned** tab to see a list of the access packages currently assigned to the user. When an access package is assigned to a user, it means that the user has access to all of the resource roles in the access package. ## View resource assignments for a user -This report enables you to list the resources currently assigned to a user in entitlement management. Note that this report is for resources managed with entitlement management. The user might have access to other resources in your directory outside of entitlement management. +This report enables you to list the resources currently assigned to a user in entitlement management. This report is for resources managed with entitlement management. The user might have access to other resources in your directory outside of entitlement management. **Prerequisite role:** Global administrator, Identity Governance administrator or User administrator -1. 
Click **Azure Active Directory** and then click **Identity Governance**. +1. Select **Azure Active Directory** and then select **Identity Governance**. -1. In the left menu, click **Reports**. +1. In the left menu, select **Reports**. -1. Click **Resource assignments for a user**. +1. Select **Resource assignments for a user**. -1. Click **Select users** to open the Select users pane. +1. Select **Select users** to open the Select users pane. -1. Find the user in the list and then click **Select**. +1. Find the user in the list and then select **Select**. A list of the resources currently assigned to the user is displayed. The list also shows the access package and policy they got the resource role from, along with start and end date for access. - If a user got access to the same resource in two or more packages, you can click an arrow to see each package and policy. + If a user got access to the same resource in two or more packages, you can select an arrow to see each package and policy.  This report enables you to list the resources currently assigned to a user in en To get additional details on how a user requested and received access to an access package, you can use the Azure AD audit log. In particular, you can use the log records in the `EntitlementManagement` and `UserManagement` categories to get additional details on the processing steps for each request. -1. Click **Azure Active Directory** and then click **Audit logs**. +1. Select **Azure Active Directory** and then select **Audit logs**. 1. At the top, change the **Category** to either `EntitlementManagement` or `UserManagement`, depending on the audit record you're looking for. -1. Click **Apply**. +1. Select **Apply**. -1. To download the logs, click **Download**. +1. To download the logs, select **Download**. When Azure AD receives a new request, it writes an audit record, in which the **Category** is `EntitlementManagement` and the **Activity** is typically `User requests access package assignment`. In the case of a direct assignment created in the Azure portal, the **Activity** field of the audit record is `Administrator directly assigns user to access package`, and the user performing the assignment is identified by the **ActorUserPrincipalName**. -Azure AD will write additional audit records while the request is in progress, including: +Azure AD writes additional audit records while the request is in progress, including: | Category | Activity | Request status | | :- | : | : |-| `EntitlementManagement` | `Auto approve access package assignment request` | Request does not require approval | +| `EntitlementManagement` | `Auto approve access package assignment request` | Request doesn't require approval | | `UserManagement` | `Create request approval` | Request requires approval | | `UserManagement` | `Add approver to request approval` | Request requires approval | | `EntitlementManagement` | `Approve access package assignment request` | Request approved |-| `EntitlementManagement` | `Ready to fulfill access package assignment request` |Request approved, or does not require approval | +| `EntitlementManagement` | `Ready to fulfill access package assignment request` |Request approved, or doesn't require approval | When a user is assigned access, Azure AD writes an audit record for the `EntitlementManagement` category with **Activity** `Fulfill access package assignment`. The user who received the access is identified by **ActorUserPrincipalName** field. 
-If access was not assigned, then Azure AD writes an audit record for the `EntitlementManagement` category with **Activity** either `Deny access package assignment request`, if the request was denied by an approver, or `Access package assignment request timed out (no approver action taken)`, if the request timed out before an approver could approve.
+If access wasn't assigned, then Azure AD writes an audit record for the `EntitlementManagement` category with **Activity** either `Deny access package assignment request`, if the request was denied by an approver, or `Access package assignment request timed out (no approver action taken)`, if the request timed out before an approver could approve.
When the user's access package assignment expires, is canceled by the user, or is removed by an administrator, Azure AD writes an audit record for the `EntitlementManagement` category with **Activity** of `Remove access package assignment`. |
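These audit records can also be pulled programmatically. As a minimal sketch, assuming the Microsoft Graph PowerShell SDK (`Microsoft.Graph.Reports` module) and the `AuditLog.Read.All` permission:

```powershell
# Requires the Microsoft Graph PowerShell SDK (Microsoft.Graph.Reports module)
Import-Module Microsoft.Graph.Reports

# Connect with permission to read the directory audit log
Connect-MgGraph -Scopes "AuditLog.Read.All"

# List recent audit records in the EntitlementManagement category
Get-MgAuditLogDirectoryAudit -Filter "category eq 'EntitlementManagement'" -Top 25 |
    Select-Object ActivityDateTime, ActivityDisplayName, Result
```

Change the category filter to `UserManagement` to see the approval-related records from the table above.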
active-directory | Entitlement Management Reprocess Access Package Assignments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-reprocess-access-package-assignments.md | As an access package manager, you can automatically reevaluate and enforce users For example, a user may have been removed from a group manually, thereby causing that user to lose access to necessary resources.
-Entitlement Management does not block outside updates to the access package's resources, so the Entitlement Management UI would not accurately display this change. Therefore, the user's assignment status would be shown as "Delivered" even though the user does not have access to the resources anymore. However, if the user's assignment is reprocessed, they will be added to the access package's resources again. Reprocessing ensures that the access package assignments are up to date, that users have access to necessary resources, and that assignments are accurately reflected in the UI.
+Entitlement Management doesn't block outside updates to the access package's resources, so the Entitlement Management UI wouldn't accurately display this change. Therefore, the user's assignment status would be shown as "Delivered" even though the user doesn't have access to the resources anymore. However, if the user's assignment is reprocessed, they'll be added to the access package's resources again. Reprocessing ensures that the access package assignments are up to date, that users have access to necessary resources, and that assignments are accurately reflected in the UI.
This article describes how to reprocess assignments in an existing access package. To use entitlement management and assign users to access packages, you must have **Prerequisite role**: Global administrator, Identity Governance administrator, User administrator, Catalog owner, Access package manager or Access package assignment manager
-If you have users who are in the "Delivered" state but do not have access to resources that are a part of the access package, you will likely need to reprocess the assignments to reassign those users to the access package's resources. Follow these steps to reprocess assignments for an existing access package:
+If you have users who are in the "Delivered" state but don't have access to resources that are a part of the access package, you'll likely need to reprocess the assignments to reassign those users to the access package's resources. Follow these steps to reprocess assignments for an existing access package:
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Click **Azure Active Directory**, and then click **Identity Governance**.
+1. Select **Azure Active Directory**, and then select **Identity Governance**.
-1. In the left menu, click **Access packages** and then open the access package with the user assignment you want to reprocess.
+1. In the left menu, select **Access packages** and then open the access package with the user assignment you want to reprocess.
-1. Underneath **Manage** on the left side, click **Assignments**.
+1. Underneath **Manage** on the left side, select **Assignments**.  1. Select all users whose assignments you wish to reprocess.
-1. Click **Reprocess**.
+1. Select **Reprocess**. ## Next steps |
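Reprocessing can also be triggered programmatically. A hedged sketch, assuming the Microsoft Graph `reprocess` action on access package assignments and a placeholder assignment ID:

```powershell
# Requires the Microsoft Graph PowerShell SDK
Import-Module Microsoft.Graph.Authentication

# Connect with permission to manage entitlement management assignments
Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All"

# <assignment-id> is a placeholder for the ID of the assignment to reprocess
Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/v1.0/identityGovernance/entitlementManagement/assignments/<assignment-id>/reprocess"
```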
active-directory | Entitlement Management Reprocess Access Package Requests | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-reprocess-access-package-requests.md | |
active-directory | Entitlement Management Request Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-request-access.md | |
active-directory | Entitlement Management Request Approve | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-request-approve.md | |
active-directory | Entitlement Management Scenarios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-scenarios.md | |
active-directory | Entitlement Management Ticketed Provisioning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-ticketed-provisioning.md | -Scenario: In this scenario you learn how to use custom extensibility, and a Logic App, to automatically generate ServiceNow ticket for provisioning for manual provisioning of users who have received assignments and need access to apps.
+Scenario: In this scenario, you learn how to use custom extensibility and a Logic App to automatically generate ServiceNow tickets for manual provisioning of users who have received assignments and need access to apps. In this tutorial, you learn how to: |
active-directory | Entitlement Management Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-troubleshoot.md | |
active-directory | Entitlement Management Verified Id Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-verified-id-settings.md | |
active-directory | How To Lifecycle Workflow Sync Attributes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/how-to-lifecycle-workflow-sync-attributes.md | The following example walks you through setting up a custom synchronization > [!NOTE] >- **msDS-cloudExtensionAttribute1** is an example source. >- **Starting with [Azure AD Connect 2.0.3.0](../hybrid/reference-connect-version-history.md#functional-changes-10), `employeeHireDate` is added to the default 'Out to Azure AD' rule, so steps 10-16 are not required.**+>- **Starting with [Azure AD Connect 2.1.19.0](../hybrid/reference-connect-version-history.md#functional-changes-1), `employeeLeaveDateTime` is added to the default 'Out to Azure AD' rule, so steps 10-16 aren't required.** For more information, see [How to customize a synchronization rule](../hybrid/how-to-connect-create-custom-sync-rule.md) and [Make a change to the default configuration](../hybrid/how-to-connect-sync-change-the-configuration.md). +## How to verify these attribute values in Azure AD +To review the values set on these properties on user objects in Azure AD, you can use the [Microsoft Graph PowerShell SDK](/powershell/microsoftgraph/installation?view=graph-powershell-1.0&preserve-view=true). For example: ++```PowerShell +# Import Module +Import-Module Microsoft.Graph.Users ++# Define the necessary scopes +$Scopes = @("User.Read.All", "User-LifeCycleInfo.Read.All") ++# Connect using the scopes defined and select the Beta API Version +Connect-MgGraph -Scopes $Scopes +Select-MgProfile -Name beta ++# Query a user, using its user ID, and return the desired properties +Get-MgUser -UserId "44198096-38ea-440d-9497-bb6b06bcaf9b" | Select-Object DisplayName, EmployeeLeaveDateTime +``` + ## Next steps - [What are lifecycle workflows?](what-are-lifecycle-workflows.md) |
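As a companion to the verification example above, cloud-only test users (where no sync rule is involved) can have these properties written directly with the same SDK. A sketch, reusing the placeholder user ID from the article and assuming the `User.ReadWrite.All` permission:

```powershell
# Requires the Microsoft Graph PowerShell SDK
Import-Module Microsoft.Graph.Users

# Connect with permission to update users
Connect-MgGraph -Scopes "User.ReadWrite.All"

# Set employeeHireDate on a test user; the user ID below is the same
# placeholder ID used in the verification example above
Update-MgUser -UserId "44198096-38ea-440d-9497-bb6b06bcaf9b" `
    -EmployeeHireDate ([datetime]"2023-09-01T08:00:00Z")
```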
active-directory | Identity Governance Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-overview.md | |
active-directory | Lifecycle Workflow Audits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-audits.md | |
active-directory | Lifecycle Workflow Extensibility | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-extensibility.md | Lifecycle Workflows allow you to create workflows that can be triggered based on ## Logic Apps prerequisites -To link a Azure Logic App with a custom task extension, the following prerequisites must be available: +To link an Azure Logic App with a custom task extension, the following prerequisites must be available: - An Azure subscription - A resource group When you create a custom task extension that waits for a response from the Logic The response can be authorized in one of the following ways: - **System-assigned managed identity (Default)** - With this choice you enable and utilize the Logic Apps system-assigned managed identity. For more information, see: [Authenticate access to Azure resources with managed identities in Azure Logic Apps](/azure/logic-apps/create-managed-service-identity)-- **No authorization** - With this choice no authorization will be granted, and you separately have to assign an application permission (LifecycleWorkflows.ReadWrite.All), or role assignment (Lifecycle Workflows Administrator). If an application is responding we do not recommend this option, as it is not following the principle of least privilege. This option may also be used if responses are only provided on behalf of a user (LifecycleWorkflows.ReadWrite.All delegated permission AND Lifecycle Workflows Administrator role assignment)
+- **No authorization** - With this choice, no authorization is granted, and you separately have to assign an application permission (LifecycleWorkflows.ReadWrite.All) or a role assignment (Lifecycle Workflows Administrator). If an application is responding, we don't recommend this option, as it doesn't follow the principle of least privilege. This option may also be used if responses are only provided on behalf of a user (LifecycleWorkflows.ReadWrite.All delegated permission AND Lifecycle Workflows Administrator role assignment). A sketch of granting this permission to a managed identity appears after this section.
+- **Existing application** - With this choice you're able to choose an existing application to respond. This can be a regular application, or a system-assigned or user-assigned managed identity. For more information on managed identity types, see: [Managed identity types](../managed-identities-azure-resources/overview.md#managed-identity-types). ## Custom task extension integration with Azure Logic Apps high-level steps |
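For the **No authorization** option above, the LifecycleWorkflows.ReadWrite.All application permission can be granted to the Logic App's managed identity with the Microsoft Graph PowerShell SDK. A sketch, assuming a placeholder object ID for the managed identity:

```powershell
# Requires the Microsoft Graph PowerShell SDK
Import-Module Microsoft.Graph.Applications

Connect-MgGraph -Scopes "AppRoleAssignment.ReadWrite.All", "Application.Read.All"

# Look up the Microsoft Graph service principal by its well-known appId
$graphSp = Get-MgServicePrincipal -Filter "appId eq '00000003-0000-0000-c000-000000000000'"

# Find the LifecycleWorkflows.ReadWrite.All application role
$appRole = $graphSp.AppRoles | Where-Object { $_.Value -eq 'LifecycleWorkflows.ReadWrite.All' }

# <managed-identity-object-id> is a placeholder for the Logic App's system-assigned identity
New-MgServicePrincipalAppRoleAssignment -ServicePrincipalId '<managed-identity-object-id>' `
    -PrincipalId '<managed-identity-object-id>' `
    -ResourceId $graphSp.Id `
    -AppRoleId $appRole.Id
```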
active-directory | Lifecycle Workflow History | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-history.md | |
active-directory | Lifecycle Workflow Templates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-templates.md | |
active-directory | Lifecycle Workflow Versioning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-versioning.md | |
active-directory | Manage Workflow Properties | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/manage-workflow-properties.md | |
active-directory | Manage Workflow Tasks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/manage-workflow-tasks.md | Changing a workflow's tasks or execution conditions requires the creation of a n ## Edit the tasks of a workflow using the Azure portal -Tasks within workflows can be added, edited, reordered, and removed at will. To edit the tasks of a workflow using the Azure portal, you'll complete the following steps: +Tasks within workflows can be added, edited, reordered, and removed at will. To edit the tasks of a workflow using the Azure portal, you complete the following steps: 1. Sign in to the [Azure portal](https://portal.azure.com). Tasks within workflows can be added, edited, reordered, and removed at will. To ## Edit the execution conditions of a workflow using the Azure portal -To edit the execution conditions of a workflow using the Azure portal, you'll do the following steps: +To edit the execution conditions of a workflow using the Azure portal, you do the following steps: 1. On the left menu of Lifecycle Workflows, select **Workflows (Preview)**. To edit the execution conditions of a workflow using the Azure portal, you'll do 1. On the left side of the screen, select **Execution conditions (Preview)**. :::image type="content" source="media/manage-workflow-tasks/execution-conditions-details.png" alt-text="Screenshot of the execution condition details of a workflow." lightbox="media/manage-workflow-tasks/execution-conditions-details.png"::: -1. On this screen you are presented with **Trigger details**. Here we have a trigger type and attribute details. In the template you can edit the attribute details to define when a workflow is run in relation to the attribute value measured in days. This attribute value can be from 0 to 60 days. +1. On this screen, you're presented with **Trigger details**. Here we have a trigger type and attribute details. In the template you can edit the attribute details to define when a workflow is run in relation to the attribute value measured in days. This attribute value can be from 0 to 60 days. 1. Select the **Scope** tab. :::image type="content" source="media/manage-workflow-tasks/execution-conditions-scope.png" alt-text="Screenshot of the execution scope page of a workflow." lightbox="media/manage-workflow-tasks/execution-conditions-scope.png"::: -1. On this screen you can define rules for who the workflow will run. In the template **Scope type** is set as Rule-Based, and you define the rule using expressions on user properties. For more information on supported user properties. see: [supported queries on user properties](/graph/aad-advanced-queries#user-properties). +1. On this screen, you can define rules for which users the workflow runs. In the template, **Scope type** is set as Rule-Based, and you define the rule using expressions on user properties (you can preview the users such a rule matches with the sketch after this section). For more information on supported user properties, see [supported queries on user properties](/graph/aad-advanced-queries#user-properties). 1. After making changes, select **Save** to capture changes to the execution conditions. To edit the execution conditions of a workflow using the Azure portal, you'll do :::image type="content" source="media/manage-workflow-tasks/manage-versions.png" alt-text="Screenshot of versions of a workflow." lightbox="media/manage-workflow-tasks/manage-versions.png"::: -1. On this page you see a list of the workflow versions. +1. On this page, you see a list of the workflow versions. 
:::image type="content" source="media/manage-workflow-tasks/manage-versions-list.png" alt-text="Screenshot of managing version list of lifecycle workflows." lightbox="media/manage-workflow-tasks/manage-versions-list.png"::: |
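The rule-based scope described above uses the same OData filter style as Microsoft Graph user queries. As a sketch, you can preview which users a rule like `(department eq 'Marketing')` would select (the property choice is illustrative; filtering on `department` is an advanced query, so the consistency header is required):

```powershell
# Requires the Microsoft Graph PowerShell SDK
Import-Module Microsoft.Graph.Users

# Connect with permission to read users
Connect-MgGraph -Scopes "User.Read.All"

# Preview the users an execution-condition rule like (department eq 'Marketing') would select
Get-MgUser -Filter "department eq 'Marketing'" -All `
    -ConsistencyLevel eventual -CountVariable count |
    Select-Object DisplayName, UserPrincipalName, Department
```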
active-directory | On Demand Workflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/on-demand-workflow.md | |
active-directory | Trigger Custom Task | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/trigger-custom-task.md | |
active-directory | What Are Lifecycle Workflows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/what-are-lifecycle-workflows.md | |
active-directory | Workflows Faqs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/workflows-faqs.md | |
active-directory | How To Connect Install Prerequisites | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-install-prerequisites.md | We recommend that you harden your Azure AD Connect server to decrease the securi - Ensure every machine has a unique local administrator password. For more information, see how [Local Administrator Password Solution (LAPS)](https://support.microsoft.com/help/3062591/microsoft-security-advisory-local-administrator-password-solution-laps) can configure unique random passwords on each workstation and server and store them in Active Directory, protected by an ACL. Only eligible authorized users can read or request the reset of these local administrator account passwords. You can obtain the LAPS for use on workstations and servers from the [Microsoft Download Center](https://www.microsoft.com/download/details.aspx?id=46899). Additional guidance for operating an environment with LAPS and privileged access workstations (PAWs) can be found in [Operational standards based on clean source principle](/windows-server/identity/securing-privileged-access/securing-privileged-access-reference-material#operational-standards-based-on-clean-source-principle). - Implement dedicated [privileged access workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/) for all personnel with privileged access to your organization's information systems. - Follow these [additional guidelines](/windows-server/identity/ad-ds/plan/security-best-practices/reducing-the-active-directory-attack-surface) to reduce the attack surface of your Active Directory environment.-- Follow the [Monitor changes to federation configuration](how-to-connect-monitor-federation-changes.md) to setup alerts to monitor changes to the trust established between your Idp and Azure AD. +- Follow the [Monitor changes to federation configuration](how-to-connect-monitor-federation-changes.md) guidance to set up alerts that monitor changes to the trust established between your IdP and Azure AD. - Enable multifactor authentication (MFA) for all users that have privileged access in Azure AD or in AD. One security issue with using Azure AD Connect is that if an attacker can get control over the Azure AD Connect server, they can manipulate users in Azure AD. To prevent an attacker from using these capabilities to take over Azure AD accounts, MFA offers protections so that even if an attacker manages to, for example, reset a user's password using Azure AD Connect, they still can't bypass the second factor. - Disable Soft Matching on your tenant. Soft Matching is a great feature to help transfer the source of authority for existing cloud managed objects to Azure AD Connect, but it comes with certain security risks. If you do not require it, you should [disable Soft Matching](how-to-connect-syncservice-features.md#blocksoftmatch). - Disable Hard Match Takeover. Hard match takeover allows Azure AD Connect to take control of a cloud managed object and change the source of authority for the object to Active Directory. Once the source of authority of an object is taken over by Azure AD Connect, changes made to the Active Directory object that is linked to the Azure AD object will overwrite the original Azure AD data - including the password hash, if Password Hash Sync is enabled. An attacker could use this capability to take over control of cloud managed objects. 
To mitigate this risk, [disable hard match takeover](/powershell/module/msonline/set-msoldirsyncfeature?view=azureadps-1.0&preserve-view=true#example-3-block-cloud-object-takeover-through-hard-matching-for-the-tenant). We recommend that you harden your Azure AD Connect server to decrease the securi * Azure AD Connect requires network connectivity to all configured domains * Azure AD Connect requires network connectivity to the root domain of all configured forests * If you have firewalls on your intranet and you need to open ports between the Azure AD Connect servers and your domain controllers, see [Azure AD Connect ports](reference-connect-ports.md) for more information.-* If your proxy or firewall limit which URLs can be accessed, the URLs documented in [Office 365 URLs and IP address ranges](/microsoft-365/enterprise/urls-and-ip-address-ranges) must be opened. Also see [Safelist the Azure portal URLs on your firewall or proxy server](../../../azure-portal/azure-portal-safelist-urls.md?tabs=public-cloud). +* If your proxy or firewall limits which URLs can be accessed, the URLs documented in [Office 365 URLs and IP address ranges](https://support.office.com/article/Office-365-URLs-and-IP-address-ranges-8548a211-3fe7-47cb-abb1-355ea5aa88a2) must be opened. Also see [Safelist the Azure portal URLs on your firewall or proxy server](../../../azure-portal/azure-portal-safelist-urls.md). * If you're using the Microsoft cloud in Germany or the Microsoft Azure Government cloud, see [Azure AD Connect sync service instances considerations](reference-connect-instances.md) for URLs. * Azure AD Connect (version 1.1.614.0 and after) by default uses TLS 1.2 for encrypting communication between the sync engine and Azure AD. If TLS 1.2 isn't available on the underlying operating system, Azure AD Connect incrementally falls back to older protocols (TLS 1.1 and TLS 1.0). From Azure AD Connect version 2.0 onward, TLS 1.0 and 1.1 are no longer supported, and installation fails if TLS 1.2 isn't enabled. * Prior to version 1.1.614.0, Azure AD Connect by default uses TLS 1.0 for encrypting communication between the sync engine and Azure AD. To change to TLS 1.2, follow the steps in [Enable TLS 1.2 for Azure AD Connect](#enable-tls-12-for-azure-ad-connect) (a sketch of the registry settings appears after this section). The minimum requirements for computers running AD FS or Web Application Proxy se * Azure VM: A2 configuration or higher ## Next steps- Learn more about [Integrating your on-premises identities with Azure Active Directory](../whatis-hybrid-identity.md).- |
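For reference, the TLS 1.2 settings mentioned above are registry-based. A minimal sketch of the commonly documented keys (run elevated and reboot afterward; verify against the linked guidance before applying):

```powershell
# Require strong cryptography for .NET Framework applications (64-bit and 32-bit)
New-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319' `
    -Name 'SchUseStrongCrypto' -Value 1 -PropertyType 'DWord' -Force
New-ItemProperty -Path 'HKLM:\SOFTWARE\WOW6432Node\Microsoft\.NETFramework\v4.0.30319' `
    -Name 'SchUseStrongCrypto' -Value 1 -PropertyType 'DWord' -Force

# Enable TLS 1.2 for the Schannel client and server roles
foreach ($role in 'Client', 'Server') {
    $path = "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\$role"
    New-Item -Path $path -Force | Out-Null
    New-ItemProperty -Path $path -Name 'Enabled' -Value 1 -PropertyType 'DWord' -Force | Out-Null
    New-ItemProperty -Path $path -Name 'DisabledByDefault' -Value 0 -PropertyType 'DWord' -Force | Out-Null
}
```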
active-directory | How To Connect Password Hash Synchronization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-password-hash-synchronization.md | The following section describes, in-depth, how password hash synchronization wor > [!NOTE] > The password hash value is **NEVER** stored in SQL. These values are only processed in memory prior to being sent to Azure AD. + ### Security considerations When synchronizing passwords, the plain-text version of your password is not exposed to the password hash synchronization feature, to Azure AD, or to any of the associated services. When *EnforceCloudPasswordPolicyForPasswordSyncedUsers* is disabled (which is th `(Get-AzureADUser -objectID <User Object ID>).passwordpolicies` -To enable the EnforceCloudPasswordPolicyForPasswordSyncedUsers feature, run the following command using the MSOnline PowerShell module as shown below. You would have to type yes for the Enable parameter as shown below : +To enable the EnforceCloudPasswordPolicyForPasswordSyncedUsers feature, run the following command using the MSOnline PowerShell module as shown below. You would have to type yes for the Enable parameter as shown below: ```
Set-MsolDirSyncFeature -Feature EnforceCloudPasswordPolicyForPasswordSyncedUsers
``` |
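Before changing the setting, you can check whether *EnforceCloudPasswordPolicyForPasswordSyncedUsers* is already enabled. A sketch using the same MSOnline module:

```powershell
# Requires the MSOnline module
Import-Module MSOnline
Connect-MsolService

# Shows whether the feature is currently enabled for the tenant
Get-MsolDirSyncFeatures -Feature EnforceCloudPasswordPolicyForPasswordSyncedUsers
```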
active-directory | Plan Connect Performance Factors | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/plan-connect-performance-factors.md | The sync process runtime has the following performance characteristics: * Import time grows linearly with the number of objects being synced. For example, if 10,000 objects take 10 minutes to import, then 20,000 objects will take approximately 20 minutes on the same server. * Export is also linear. * The sync will grow exponentially based on the number of objects with references to other objects. Group memberships and nested groups have the main performance impact, because their members refer to user objects or other groups. These references must be resolved to actual objects in the MV to complete the sync cycle.+* Changing a group member will lead to a re-evaluation of all group members. For example, if you have a group with 50K members and you only update 1 member, this will trigger a synchronization of all 50K members. ### Filtering |
active-directory | Reference Connect Health Version History | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/reference-connect-health-version-history.md | For feature feedback, vote at [Connect Health User Voice channel](https://feedba ## 27 March 2023 **Agent Update** -Azure AD Connect Health ADDS and ADFS Health Agents (version 3.2.2256.26) +Azure AD Connect Health ADDS and ADFS Health Agents (version 3.2.2256.26, Download Center Only) - We created a fix so that the agents are FIPS compliant. The change was to have the agents use 'CloudStorageAccount.UseV1MD5 = false' so the agent uses only FIPS-compliant cryptography; otherwise, the Azure blob client causes FIPS exceptions to be thrown. |
active-directory | Reference Connect Version History | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/reference-connect-version-history.md | This article helps you keep track of the versions that have been released and un You can upgrade your Azure AD Connect server from all supported versions with the latest versions: -You can download the latest version of Azure AD Connect 2.0 from the [Microsoft Download Center](https://www.microsoft.com/download/details.aspx?id=47594). See the [release notes for the latest V2.0 release](reference-connect-version-history.md#20280).\ +You can download the latest version of Azure AD Connect 2.0 from the [Microsoft Download Center](https://www.microsoft.com/download/details.aspx?id=47594). See the [release notes for the latest V2.0 release](reference-connect-version-history.md#21200).\ Get notified when to revisit this page for updates by copying and pasting this URL: `https://aka.ms/aadconnectrss` into your feed reader. To read more about autoupgrade, see [Azure AD Connect: Automatic upgrade](how-to ### Bug fixes + - We fixed a bug where the new employeeLeaveDateTime attribute wasn't syncing correctly in version 2.1.19.0. Note that if the incorrect attribute was already used in a rule, then the rule must be updated with the new attribute and any objects in the AAD connector space that have the incorrect attribute must be removed with the "Remove-ADSyncCSObject" cmdlet, and then a full sync cycle must be run. ## 2.1.19.0 To read more about autoupgrade, see [Azure AD Connect: Automatic upgrade](how-to > We have discovered a security vulnerability in the Azure AD Connect Admin Agent. If you have installed the Admin Agent previously it is important that you update your Azure AD Connect server(s) to this version to mitigate the vulnerability. ### Functional changes+ - We have removed the public preview functionality for the Admin Agent from Azure AD Connect. We won't provide this functionality going forward. - We added support for two new attributes: employeeOrgDataCostCenter and employeeOrgDataDivision. - We added CertificateUserIds attribute to AAD Connector static schema. - The AAD Connect wizard will now abort if write event logs permission is missing. To read more about autoupgrade, see [Azure AD Connect: Automatic upgrade](how-to - We made the following Accessibility fixes: - Fixed a bug where focus is lost during keyboard navigation on the Domain and OU Filtering page. - We updated the accessible name of the Clear Runs drop-down.- - We fixed a bug where the tooltip of the "Help" button is not accessible through keyboard if navigated with arrow keys. + - We fixed a bug where the tooltip of the "Help" button isn't accessible through the keyboard when navigated with arrow keys. - We fixed a bug where the underline of hyperlinks was missing on the Welcome page of the wizard.- - We fixed a bug in Sync Service Manager's About dialog where the Screen reader is not announcing the information about the data appearing under the "About" dialog box. - - We fixed a bug where the Management Agent Name was not mentioned in logs when an error occurred while validating MA Name. - - We fixed several accessibility issues with the keyboard navigation and custom control type fixes. The Tooltip of the "help" button is not collapsing by pressing "Esc" key. There was an Illogical keyboard focus on the User Sign In radio buttons and there was an invalid control type on the help popups. 
+ - We fixed a bug in Sync Service Manager's About dialog where the screen reader doesn't announce the information about the data appearing under the "About" dialog box. + - We fixed a bug where the Management Agent Name wasn't mentioned in logs when an error occurred while validating the MA Name. + - We fixed several accessibility issues with keyboard navigation and custom control types. The tooltip of the "Help" button didn't collapse when pressing the Esc key, there was an illogical keyboard focus on the User Sign In radio buttons, and there was an invalid control type on the help popups. - We fixed a bug where an empty label was causing an accessibility error. ## 2.1.1.0 To read more about autoupgrade, see [Azure AD Connect: Automatic upgrade](how-to 3/24/2022: Released for download only, not available for auto upgrade ### Bug fixes+ - Fixed an issue where some sync rule functions weren't parsing surrogate pairs properly. + - Fixed an issue where, under certain circumstances, the sync service wouldn't start due to a model db corruption. You can read more about the model db corruption issue in [this article](/troubleshoot/azure/active-directory/resolve-model-database-corruption-sqllocaldb). ## 2.0.91.0 Under certain circumstances, the installer for this version displays an error th ### Bug fixes -We fixed a bug that occurred when a domain was renamed and Password Hash Sync failed with an error that indicated "a specified cast is not valid" in the Event log. This regression is from earlier builds. +We fixed a bug that occurred when a domain was renamed and Password Hash Sync failed with an error that indicated "a specified cast is not valid" in the Event log. This regression is from earlier builds. ## 1.6.13.0 We fixed a bug that occurred when a domain was renamed and Password Hash Sync fa ### Bug fixes -We fixed a bug that occurred when a domain was renamed and Password Hash Sync failed with an error that indicated "a specified cast is not valid" in the Event log. This regression is from earlier builds. +We fixed a bug that occurred when a domain was renamed and Password Hash Sync failed with an error that indicated "a specified cast is not valid" in the Event log. This regression is from earlier builds. ### Functional changes |
active-directory | Decommission Connect Sync V1 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/decommission-connect-sync-v1.md | + + Title: 'Decommissioning Azure AD Connect V1' +description: This article describes Azure AD Connect V1 decommissioning and how to migrate to V2. ++documentationcenter: '' +++editor: '' +++ na + Last updated : 05/31/2023+++++++# Decommission Azure AD Connect V1 ++The one-year advance notice of Azure AD Connect V1's retirement was announced in August 2021. As of August 31, 2022, all V1 versions went out of support and could stop working unexpectedly at any point. ++On **October 1, 2023**, Azure AD cloud services will stop accepting connections from Azure AD Connect V1 servers, and identities will no longer synchronize. ++If you are still using Azure AD Connect V1, you must take action immediately. ++>[!IMPORTANT] +>Azure AD Connect V1 will stop working on October 1, 2023. You need to migrate to cloud sync or connect sync V2. ++## Migrate to cloud sync +Before moving to Azure AD Connect V2, you should see if cloud sync is right for you instead. Cloud sync uses a lightweight provisioning agent and is fully configurable through the portal. To choose the best sync tool for your situation, use the [Wizard to evaluate sync options](https://aka.ms/EvaluateSyncOptions). ++Based on your environment and needs, you may qualify for moving to cloud sync. For a comparison of cloud sync and connect sync, see [Comparison between cloud sync and connect sync](cloud-sync/what-is-cloud-sync.md#comparison-between-azure-ad-connect-and-cloud-sync). To learn more, read [What is cloud sync?](cloud-sync/what-is-cloud-sync.md) and [What is the provisioning agent?](cloud-sync/what-is-provisioning-agent.md) ++## Migrating to Azure AD Connect V2 +If you aren't yet eligible to move to cloud sync, use this table for more information on migrating to V2. ++|Title|Description| +|--|--| +|[Information on deprecation](connect/deprecated-azure-ad-connect.md)|Information on Azure AD Connect V1 deprecation| +|[What is Azure AD Connect V2?](connect/whatis-azure-ad-connect-v2.md)|Information on the latest version of Azure AD Connect| +|[Upgrading from a previous version](connect/how-to-upgrade-previous-version.md)|Information on moving from one version of Azure AD Connect to another| +++## Frequently asked questions ++++## Next steps ++- [What is Azure AD Connect V2?](whatis-azure-ad-connect-v2.md) +- [Azure AD Cloud Sync](../cloud-sync/what-is-cloud-sync.md) +- [Azure AD Connect version history](reference-connect-version-history.md) |
active-directory | Cross Tenant Synchronization Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/cross-tenant-synchronization-overview.md | In the target tenant: Cross-tenant sync relies on the Azure AD External Identiti Which clouds can cross-tenant synchronization be used in? -- Cross-tenant synchronization is supported within the commercial cloud. It is not supported within Azure Government or Azure China.+- Cross-tenant synchronization is supported within the commercial cloud and Azure Government. +- Cross-tenant synchronization isn't supported within the Azure China cloud. - Synchronization is only supported between two tenants in the same cloud. - Cross-cloud (such as public cloud to Azure Government) isn't currently supported. |
active-directory | Groups Role Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-role-settings.md | In the **Notifications** tab on the role settings page, Privileged Identity Mana - **Send emails to both default recipients and more recipients**<br>You can send emails to both the default recipients and more recipients by selecting the default recipient checkbox and adding email addresses for other recipients. - **Critical emails only**<br>For each type of email, you can select the check box to receive critical emails only. This means that Privileged Identity Management sends emails to the specified recipients only when the email requires immediate action. For example, emails asking users to extend their role assignment aren't triggered, while emails requiring admins to approve an extension request are. +## Manage role settings using Microsoft Graph ++To manage role settings for groups using PIM APIs in Microsoft Graph, use the [unifiedRoleManagementPolicy resource type and its related methods](/graph/api/resources/unifiedrolemanagementpolicy). ++In Microsoft Graph, role settings are referred to as rules and they're assigned to groups through container policies. You can retrieve all policies that are scoped to a group and for each policy, retrieve the associated collection of rules by using an `$expand` query parameter. The syntax for the request is as follows: ++```http +GET https://graph.microsoft.com/beta/policies/roleManagementPolicies?$filter=scopeId eq '{groupId}' and scopeType eq 'Group'&$expand=rules +``` ++For more information about managing role settings through PIM APIs in Microsoft Graph, see [Role settings and PIM](/graph/api/resources/privilegedidentitymanagement-for-groups-api-overview#policy-settings-in-pim-for-groups). For examples of updating rules, see [Update rules in PIM using Microsoft Graph](/graph/how-to-pim-update-rules). + ## Next steps - [Assign eligibility for a group (preview) in Privileged Identity Management](groups-assign-member-owner.md) |
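The same query can be issued from PowerShell. A sketch, assuming the Microsoft Graph PowerShell SDK and a placeholder group ID (note the backtick-escaped `$` characters so PowerShell passes the OData query parameters through literally):

```powershell
# Requires the Microsoft Graph PowerShell SDK
Import-Module Microsoft.Graph.Authentication

Connect-MgGraph -Scopes "RoleManagementPolicy.Read.AzureADGroup"

# <group-id> is a placeholder; retrieves all policies scoped to the group, with their rules
$uri = "https://graph.microsoft.com/beta/policies/roleManagementPolicies" +
       "?`$filter=scopeId eq '<group-id>' and scopeType eq 'Group'&`$expand=rules"
Invoke-MgGraphRequest -Method GET -Uri $uri
```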
active-directory | Pim How To Change Default Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-how-to-change-default-settings.md | You can send emails to both the default recipient and another recipient by selecting - **Critical emails only**</br> For each type of email, you can select the check box to receive critical emails only. This means that Privileged Identity Management sends emails to the specified recipients only when the email requires immediate action. For example, emails asking users to extend their role assignment aren't triggered, while emails requiring admins to approve an extension request are. -## Manage role settings through Microsoft Graph +## Manage role settings using Microsoft Graph -To manage settings for Azure AD roles through Microsoft Graph, use the [unifiedRoleManagementPolicy resource type and related methods](/graph/api/resources/unifiedrolemanagementpolicy). +To manage settings for Azure AD roles using PIM APIs in Microsoft Graph, use the [unifiedRoleManagementPolicy resource type and related methods](/graph/api/resources/unifiedrolemanagementpolicy). -In Microsoft Graph, role settings are referred to as rules and they're assigned to Azure AD roles through container policies. Each Azure AD role is assigned a specific policy object. You can retrieve all policies that are scoped to Azure AD roles and for each policy, retrieve the associated collection of rules through an `$expand` query parameter. The syntax for the request is as follows: +In Microsoft Graph, role settings are referred to as rules and they're assigned to Azure AD roles through container policies. Each Azure AD role is assigned a specific policy object. You can retrieve all policies that are scoped to Azure AD roles and for each policy, retrieve the associated collection of rules by using an `$expand` query parameter. The syntax for the request is as follows: ```http GET https://graph.microsoft.com/v1.0/policies/roleManagementPolicies?$filter=scopeId eq '/' and scopeType eq 'DirectoryRole'&$expand=rules ``` -Rules are grouped into containers. The containers are further broken down into rule definitions that are identified by unique IDs for easier management. For example, a **unifiedRoleManagementPolicyEnablementRule** container exposes three rule definitions identified by the following unique IDs. --+ `Enablement_Admin_Eligibility` - Rules that apply for admins to carry out operations on role eligibilities. For example, whether justification is required, and whether for all operations (for example, renewal, activation, or deactivation) or only for specific operations. -+ `Enablement_Admin_Assignment` - Rules that apply for admins to carry out operations on role assignments. For example, whether justification is required, and whether for all operations (for example, renewal, deactivation, or extension) or only for specific operations. -+ `Enablement_EndUser_Assignment` - Rules that apply for principals to enable their assignments. For example, whether multifactor authentication is required. ---To update these rule definitions, use the [update rules API](/graph/api/unifiedrolemanagementpolicyrule-update). For example, the following request specifies an empty **enabledRules** collection, therefore deactivating the enabled rules for a policy, such as multifactor authentication, ticketing information and justification. 
--```http -PATCH https://graph.microsoft.com/v1.0/policies/roleManagementPolicies/DirectoryRole_cab01047-8ad9-4792-8e42-569340767f1b_70c808b5-0d35-4863-a0ba-07888e99d448/rules/Enablement_EndUser_Assignment -{ - "@odata.type": "#microsoft.graph.unifiedRoleManagementPolicyEnablementRule", - "id": "Enablement_EndUser_Assignment", - "enabledRules": [], - "target": { - "caller": "EndUser", - "operations": [ - "all" - ], - "level": "Assignment", - "inheritableSettings": [], - "enforcedSettings": [] - } -} -``` --You can retrieve the collection of rules that are applied to all Azure AD roles or a specific Azure AD role through the [unifiedroleManagementPolicyAssignment resource type and related methods](/graph/api/resources/unifiedrolemanagementpolicyassignment). For example, the following request uses the `$expand` query parameter to retrieve the rules that are applied to an Azure AD role identified by **roleDefinitionId** or **templateId** `62e90394-69f5-4237-9190-012177145e10`. --```http -GET https://graph.microsoft.com/v1.0/policies/roleManagementPolicyAssignments?$filter=scopeId eq '/' and scopeType eq 'DirectoryRole' and roleDefinitionId eq '62e90394-69f5-4237-9190-012177145e10'&$expand=policy($expand=rules) -``` --For more information about managing role settings through PIM, see [Role settings and PIM](/graph/api/resources/privilegedidentitymanagementv3-overview#role-settings-and-pim). For examples of updating rules, see [Use PIM APIs in Microsoft Graph to update Azure AD rules](/graph/how-to-pim-update-rules). +For more information about managing role settings through PIM APIs in Microsoft Graph, see [Role settings and PIM](/graph/api/resources/privilegedidentitymanagementv3-overview#role-settings-and-pim). For examples of updating rules, see [Update rules in PIM using Microsoft Graph](/graph/how-to-pim-update-rules). ## Next steps |
active-directory | Github Enterprise Managed User Oidc Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/github-enterprise-managed-user-oidc-provisioning-tutorial.md | -> [GitHub Enterprise Managed User](https://docs.github.com/enterprise-cloud@latest/admin/authentication/managing-your-enterprise-users-with-your-identity-provider/about-enterprise-managed-users) is a feature of GitHub Enterprise Cloud which is different from GitHub Enterprise's standard OIDC SSO and user provisioning implementation. If you haven't specifically requested EMU instance, you have standard GitHub Enterprise Cloud plan. In that case, please refer to [the documentation](./github-provisioning-tutorial.md) to configure user provisioning in your non-EMU organisation. User provisioning is not supported for [GitHub Enteprise Accounts](https://docs.github.com/enterprise-cloud@latest/admin/overview/about-enterprise-accounts) +> [GitHub Enterprise Managed User (EMU)](https://docs.github.com/enterprise-cloud@latest/admin/authentication/managing-your-enterprise-users-with-your-identity-provider/about-enterprise-managed-users) is a different type of [GitHub Enterprise Account](https://docs.github.com/enterprise-cloud@latest/admin/overview/about-enterprise-accounts). If you haven't specifically requested an EMU instance, you have a standard GitHub Enterprise Account. In that case, please refer to [the documentation](./github-provisioning-tutorial.md) to configure user provisioning in your non-EMU organization. User provisioning is not supported for [standard GitHub Enterprise Accounts](https://docs.github.com/enterprise-cloud@latest/admin/overview/about-enterprise-accounts), but is supported for organizations under a standard GitHub Enterprise Account. ## Capabilities Supported > [!div class="checklist"] |
active-directory | Kintone Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/kintone-tutorial.md | In this section, you'll enable B.Simon to use Azure single sign-on by granting a a. In the **Login URL** textbox, paste the value of **Login URL** which you have copied from Azure portal. - b. In the **Logout URL** textbox, paste the value of **Logout URL** which you have copied from Azure portal. + b. In the **Logout URL** textbox, paste the value: `https://login.microsoftonline.com/common/wsfederation?wa=wsignout1.0`. c. Click **Browse** to upload your downloaded certificate file from Azure portal. |
aks | Azure Csi Disk Storage Provision | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-disk-storage-provision.md | Last updated 04/11/2023 # Create and use a volume with Azure Disks in Azure Kubernetes Service (AKS) -A persistent volume represents a piece of storage that has been provisioned for use with Kubernetes pods. A persistent volume can be used by one or many pods, and can be dynamically or statically provisioned. This article shows you how to dynamically create persistent volumes with Azure Disks in an Azure Kubernetes Service (AKS) cluster. +A persistent volume represents a piece of storage provisioned for use with Kubernetes pods. You can use a persistent volume with one or many pods, and you can provision it dynamically or statically. This article shows you how to dynamically create persistent volumes with Azure Disks in an Azure Kubernetes Service (AKS) cluster. > [!NOTE] > An Azure disk can only be mounted with *Access mode* type *ReadWriteOnce*, which makes it available to one node in AKS. This access mode still allows multiple pods to access the volume when the pods run on the same node. For more information, see [Kubernetes PersistentVolume access modes][access-modes]. A persistent volume represents a piece of storage that has been provisioned for This article shows you how to: * Work with a dynamic persistent volume (PV) by installing the Container Storage Interface (CSI) driver and dynamically creating one or more Azure managed disks to attach to a pod.-* Work with a static PV by creating one or more Azure managed disks, or use an existing one and attach it to a pod. +* Work with a static PV by creating one or more Azure managed disks, or use an existing one and attach it to a pod. For more information on Kubernetes volumes, see [Storage options for applications in AKS][concepts-storage]. This section provides guidance for cluster administrators who want to provision |skuName | Azure Disks storage account type (alias: `storageAccountType`)| `Standard_LRS`, `Premium_LRS`, `StandardSSD_LRS`, `PremiumV2_LRS`, `UltraSSD_LRS`, `Premium_ZRS`, `StandardSSD_ZRS` | No | `StandardSSD_LRS`| |fsType | File System Type | `ext4`, `ext3`, `ext2`, `xfs`, `btrfs` for Linux, `ntfs` for Windows | No | `ext4` for Linux, `ntfs` for Windows| |cachingMode | [Azure Data Disk Host Cache Setting][disk-host-cache-setting] | `None`, `ReadOnly`, `ReadWrite` | No | `ReadOnly`|-|resourceGroup | Specify the resource group where the Azure Disks will be created | Existing resource group name | No | If empty, driver will use the same resource group name as current AKS cluster| -|DiskIOPSReadWrite | [UltraSSD disk][ultra-ssd-disks] IOPS Capability (minimum: 2 IOPS/GiB ) | 100~160000 | No | `500`| +|resourceGroup | Specify the resource group for the Azure Disks | Existing resource group name | No | If empty, driver uses the same resource group name as current AKS cluster| +|DiskIOPSReadWrite | [UltraSSD disk][ultra-ssd-disks] IOPS Capability (minimum: 2 IOPS/GiB) | 100~160000 | No | `500`| |DiskMBpsReadWrite | [UltraSSD disk][ultra-ssd-disks] Throughput Capability (minimum: 0.032/GiB) | 1~2000 | No | `100`| |LogicalSectorSize | Logical sector size in bytes for ultra disk. Supported values are 512 and 4096. 4096 is the default. 
| `512`, `4096` | No | `4096`| |tags | Azure Disk [tags][azure-tags] | Tag format: `key1=val1,key2=val2` | No | ""| This section provides guidance for cluster administrators who want to provision ### Built-in storage classes -A storage class is used to define how a unit of storage is dynamically created with a persistent volume. For more information on Kubernetes storage classes, see [Kubernetes storage classes][kubernetes-storage-classes]. +Storage classes define how a unit of storage is dynamically created with a persistent volume. For more information on Kubernetes storage classes, see [Kubernetes storage classes][kubernetes-storage-classes]. -Each AKS cluster includes four pre-created storage classes, two of them configured to work with Azure Disks: +Each AKS cluster includes four precreated storage classes, two of them configured to work with Azure Disks: 1. The *default* storage class provisions a standard SSD Azure Disk.- * Standard storage is backed by Standard SSDs and delivers cost-effective storage while still delivering reliable performance. -1. The *managed-csi-premium* storage class provisions a premium Azure Disk. - * Premium disks are backed by SSD-based high-performance, low-latency disks. They're ideal for VMs running production workloads. When you use the Azure Disk CSI driver on AKS, you can also use the `managed-csi` storage class, which is backed by Standard SSD locally redundant storage (LRS). + * Standard SSDs back Standard storage and deliver cost-effective storage while still delivering reliable performance. +2. The *managed-csi-premium* storage class provisions a premium Azure Disk. + * SSD-based high-performance, low-latency disks back Premium disks. They're ideal for VMs running production workloads. When you use the Azure Disk CSI driver on AKS, you can also use the `managed-csi` storage class, which is backed by Standard SSD locally redundant storage (LRS). Reducing the size of a PVC isn't supported (this prevents data loss). You can edit an existing storage class using the `kubectl edit sc` command, or you can create your own custom storage class. For example, if you want to use a disk of size 4 TiB, you must create a storage class that defines `cachingmode: None` because [disk caching isn't supported for disks 4 TiB and larger][disk-host-cache-setting]. For more information about storage classes and creating your own storage class, see [Storage options for applications in AKS][storage-class-concepts]. -Use the [kubectl get sc][kubectl-get] command to see the pre-created storage classes. The following example shows the pre-create storage classes available within an AKS cluster: +You can see the precreated storage classes using the [`kubectl get sc`][kubectl-get] command. The following example shows the precreated storage classes available within an AKS cluster: ```bash kubectl get sc
managed-csi disk.csi.azure.com 1h
``` ### Create a persistent volume claim -A persistent volume claim (PVC) is used to automatically provision storage based on a storage class. In this case, a PVC can use one of the pre-created storage classes to create a standard or premium Azure managed disk. +A persistent volume claim (PVC) automatically provisions storage based on a storage class. In this case, a PVC can use one of the precreated storage classes to create a standard or premium Azure managed disk. -1. Create a file named `azure-pvc.yaml`, and copy in the following manifest. The claim requests a disk named `azure-managed-disk` that is *5 GB* in size with *ReadWriteOnce* access. 
The *managed-csi* storage class is specified as the storage class. +1. Create a file named `azure-pvc.yaml` and copy in the following manifest. The claim requests a disk named `azure-managed-disk` that's *5 GB* in size with *ReadWriteOnce* access. The *managed-csi* storage class is specified as the storage class. ```yaml apiVersion: v1 A persistent volume claim (PVC) is used to automatically provision storage based > [!TIP] > To create a disk that uses premium storage, use `storageClassName: managed-csi-premium` rather than *managed-csi*. -2. Create the persistent volume claim with the [kubectl apply][kubectl-apply] command and specify your *azure-pvc.yaml* file: +2. Create the persistent volume claim using the [`kubectl apply`][kubectl-apply] command and specify your *azure-pvc.yaml* file. ```bash kubectl apply -f azure-pvc.yaml A persistent volume claim (PVC) is used to automatically provision storage based ### Use the persistent volume -Once the persistent volume claim has been created and the disk successfully provisioned, a pod can be created with access to the disk. The following manifest creates a basic NGINX pod that uses the persistent volume claim named *azure-managed-disk* to mount the Azure Disk at the path `/mnt/azure`. For Windows Server containers, specify a *mountPath* using the Windows path convention, such as *'D:'*. +After you create the persistent volume claim, you must verify it has a status of `Pending`. The `Pending` status indicates it's ready to be used by a pod. -1. Create a file named `azure-pvc-disk.yaml`, and copy in the following manifest: +1. Verify the status of the PVC using the `kubectl describe pvc` command. ++ ```bash + kubectl describe pvc azure-managed-disk + ``` ++ The output of the command resembles the following condensed example: ++ ```output + Name: azure-managed-disk + Namespace: default + StorageClass: managed-csi + Status: Pending + [...] + ``` ++2. Create a file named `azure-pvc-disk.yaml` and copy in the following manifest. This manifest creates a basic NGINX pod that uses the persistent volume claim named *azure-managed-disk* to mount the Azure Disk at the path `/mnt/azure`. For Windows Server containers, specify a *mountPath* using the Windows path convention, such as *'D:'*. ```yaml kind: Pod Once the persistent volume claim has been created and the disk successfully prov claimName: azure-managed-disk ``` -2. Create the pod with the [kubectl apply][kubectl-apply] command, as shown in the following example: +3. Create the pod using the [`kubectl apply`][kubectl-apply] command. ```bash kubectl apply -f azure-pvc-disk.yaml Once the persistent volume claim has been created and the disk successfully prov pod/mypod created ``` -3. You now have a running pod with your Azure Disk mounted in the `/mnt/azure` directory. This configuration can be seen when inspecting your pod using the [kubectl describe][kubectl-describe] command, as shown in the following condensed example: +4. You now have a running pod with your Azure Disk mounted in the `/mnt/azure` directory. Check the pod configuration using the [`kubectl describe`][kubectl-describe] command. ```bash kubectl describe pod mypod This section provides guidance for cluster administrators who want to create one ### Create an Azure disk -When you create an Azure disk for use with AKS, you can create the disk resource in the **node** resource group. This approach allows the AKS cluster to access and manage the disk resource. 
If instead you created the disk in a separate resource group, you must grant the Azure Kubernetes Service (AKS) managed identity for your cluster the `Contributor` role to the disk's resource group. In this exercise, you're going to create the disk in the same resource group as your cluster. +When you create an Azure disk for use with AKS, you can create the disk resource in the **node** resource group. This approach allows the AKS cluster to access and manage the disk resource. If you instead create the disk in a separate resource group, you must grant the Azure Kubernetes Service (AKS) managed identity for your cluster the `Contributor` role to the disk's resource group. -1. Identify the resource group name using the [az aks show][az-aks-show] command and add the `--query nodeResourceGroup` parameter. The following example gets the node resource group for the AKS cluster name *myAKSCluster* in the resource group name *myResourceGroup*: +1. Identify the resource group name using the [`az aks show`][az-aks-show] command and add the `--query nodeResourceGroup` parameter. ```azurecli-interactive az aks show --resource-group myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv + # Output MC_myResourceGroup_myAKSCluster_eastus ``` -2. Create a disk using the [az disk create][az-disk-create] command. Specify the node resource group name obtained in the previous command, and then a name for the disk resource, such as *myAKSDisk*. The following example creates a *20*GiB disk, and outputs the ID of the disk after it's created. If you need to create a disk for use with Windows Server containers, add the `--os-type windows` parameter to correctly format the disk. +2. Create a disk using the [`az disk create`][az-disk-create] command. Specify the node resource group name and a name for the disk resource, such as *myAKSDisk*. The following example creates a *20*GiB disk, and outputs the ID of the disk after it's created. If you need to create a disk for use with Windows Server containers, add the `--os-type windows` parameter to correctly format the disk. ```azurecli-interactive az disk create \ When you create an Azure disk for use with AKS, you can create the disk resource > [!NOTE] > Azure Disks are billed by SKU for a specific size. These SKUs range from 32GiB for S4 or P4 disks to 32TiB for S80 or P80 disks (in preview). The throughput and IOPS performance of a Premium managed disk depends on both the SKU and the instance size of the nodes in the AKS cluster. See [Pricing and Performance of Managed Disks][managed-disk-pricing-performance]. - The disk resource ID is displayed once the command has successfully completed, as shown in the following example output. This disk ID is used to mount the disk in the next section. + The disk resource ID is displayed once the command has successfully completed, as shown in the following example output. You use the disk ID to mount the disk in the next section. ```output /subscriptions/<subscriptionID>/resourceGroups/MC_myAKSCluster_myAKSCluster_eastus/providers/Microsoft.Compute/disks/myAKSDisk When you create an Azure disk for use with AKS, you can create the disk resource ### Mount disk as a volume -1. Create a *pv-azuredisk.yaml* file with a *PersistentVolume*. Update `volumeHandle` with disk resource ID from the previous step. For example: +1. Create a *pv-azuredisk.yaml* file with a *PersistentVolume*. Update `volumeHandle` with disk resource ID from the previous step. 
```yaml apiVersion: v1 When you create an Azure disk for use with AKS, you can create the disk resource fsType: ext4 ``` -2. Create a *pvc-azuredisk.yaml* file with a *PersistentVolumeClaim* that uses the *PersistentVolume*. For example: +2. Create a *pvc-azuredisk.yaml* file with a *PersistentVolumeClaim* that uses the *PersistentVolume*. ```yaml apiVersion: v1 When you create an Azure disk for use with AKS, you can create the disk resource storageClassName: managed-csi ``` -3. Use the [kubectl apply][kubectl-apply] commands to create the *PersistentVolume* and *PersistentVolumeClaim*, referencing the two YAML files created earlier: +3. Create the *PersistentVolume* and *PersistentVolumeClaim* using the [`kubectl apply`][kubectl-apply] command and reference the two YAML files you created. ```bash kubectl apply -f pv-azuredisk.yaml kubectl apply -f pvc-azuredisk.yaml ``` -4. To verify your *PersistentVolumeClaim* is created and bound to the *PersistentVolume*, run the -following command: +4. Verify your *PersistentVolumeClaim* is created and bound to the *PersistentVolume* using the `kubectl get pvc` command. ```bash kubectl get pvc pvc-azuredisk following command: pvc-azuredisk Bound pv-azuredisk 20Gi RWO 5s ``` -5. Create a *azure-disk-pod.yaml* file to reference your *PersistentVolumeClaim*. For example: +5. Create an *azure-disk-pod.yaml* file to reference your *PersistentVolumeClaim*. ```yaml apiVersion: v1 following command: claimName: pvc-azuredisk ``` -6. Run the [kubectl apply][kubectl-apply] command to apply the configuration and mount the volume, referencing the YAML -configuration file created in the previous steps: +6. Apply the configuration and mount the volume using the [`kubectl apply`][kubectl-apply] command. ```bash kubectl apply -f azure-disk-pod.yaml ``` +## Clean up resources ++When you're done with the resources created in this article, you can remove them using the `kubectl delete` command. ++```bash +# Remove the pod +kubectl delete -f azure-pvc-disk.yaml ++# Remove the persistent volume claim +kubectl delete -f azure-pvc.yaml +``` + ## Next steps -- To learn how to use CSI driver for Azure Disks storage, see [Use Azure Disks storage with CSI driver][azure-disks-storage-csi].-- For associated best practices, see [Best practices for storage and backups in AKS][operator-best-practices-storage].+* To learn how to use CSI driver for Azure Disks storage, see [Use Azure Disks storage with CSI driver][azure-disks-storage-csi]. +* For associated best practices, see [Best practices for storage and backups in AKS][operator-best-practices-storage]. 
<!-- LINKS - external --> [access-modes]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes [kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply [kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get [kubernetes-storage-classes]: https://kubernetes.io/docs/concepts/storage/storage-classes/-[kubernetes-volumes]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/ [managed-disk-pricing-performance]: https://azure.microsoft.com/pricing/details/managed-disks/ [kubectl-describe]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe <!-- LINKS - internal --> [azure-storage-account]: ../storage/common/storage-introduction.md [azure-disks-storage-csi]: azure-disk-csi.md-[azure-files-pvc]: azure-files-dynamic-pv.md -[az-disk-list]: /cli/azure/disk#az_disk_list -[az-snapshot-create]: /cli/azure/snapshot#az_snapshot_create [az-disk-create]: /cli/azure/disk#az_disk_create-[az-disk-show]: /cli/azure/disk#az_disk_show [az-aks-show]: /cli/azure/aks#az-aks-show [install-azure-cli]: /cli/azure/install-azure-cli [operator-best-practices-storage]: operator-best-practices-storage.md |
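As a quick check, you can confirm the disk is mounted and writable from inside the pod. The following is a minimal sketch that assumes the NGINX pod named *mypod* and the `/mnt/azure` mount path from the preceding examples.

```bash
# Confirm the Azure disk is mounted at /mnt/azure inside the pod
kubectl exec mypod -- df -h /mnt/azure

# Confirm the volume is writable by creating a test file
kubectl exec mypod -- touch /mnt/azure/test-file
```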
aks | Nat Gateway | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/nat-gateway.md | Windows enables OutboundNAT by default. You can now manually disable OutboundNAT 1. Install or update `aks-preview` using the [`az extension add`][az-extension-add] or [`az extension update`][az-extension-update] command. - ```azurecli - # Install aks-preview + ```azurecli + # Install aks-preview - az extension add --name aks-preview + az extension add --name aks-preview - # Update aks-preview + # Update aks-preview - az extension update --name aks-preview - ``` + az extension update --name aks-preview + ``` 2. Register the feature flag using the [`az feature register`][az-feature-register] command. - ```azurecli - az feature register --namespace Microsoft.ContainerService --name DisableWindowsOutboundNATPreview - ``` + ```azurecli + az feature register --namespace Microsoft.ContainerService --name DisableWindowsOutboundNATPreview + ``` 3. Check the registration status using the [`az feature list`][az-feature-list] command. - ```azurecli - az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/DisableWindowsOutboundNATPreview')].{Name:name,State:properties.state}" - ``` + ```azurecli + az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/DisableWindowsOutboundNATPreview')].{Name:name,State:properties.state}" + ``` 4. Refresh the registration of the `Microsoft.ContainerService` resource provider us - ```azurecli - az provider register --namespace Microsoft.ContainerService - ``` + ```azurecli + az provider register --namespace Microsoft.ContainerService + ``` * Your clusters must have a managed NAT gateway (which may increase the overall cost). * If you're using Kubernetes version 1.25 or older, you need to [update your deployment configuration][upgrade-kubernetes]. |
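Feature registration can take several minutes. As a minimal sketch, assuming the feature name registered in the steps above, you can poll a single flag's state with `az feature show` and wait until it returns *Registered* before refreshing the resource provider:

```azurecli
# Check the state of the preview feature; re-run until it returns "Registered"
az feature show --namespace Microsoft.ContainerService --name DisableWindowsOutboundNATPreview --query properties.state -o tsv
```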
aks | Node Auto Repair | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-auto-repair.md | In many cases, AKS can determine if a node is unhealthy and attempt to repair th * A node status isn't being reported due to an error in network configuration. * A node failed to initially register as a healthy node. +Node Autodrain is a best-effort service and cannot be guaranteed to operate perfectly in all scenarios. ## Next steps Use [availability zones][availability-zones] to increase high availability with your AKS cluster workloads. |
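To see the health signals that node auto-repair acts on, you can inspect node status and conditions yourself. The following is a minimal sketch; the node name is illustrative.

```bash
# List nodes and their status; a node stuck in NotReady is a repair candidate
kubectl get nodes

# Review the detailed conditions (Ready, MemoryPressure, DiskPressure, and so on) reported for one node
kubectl describe node aks-nodepool1-12345678-vmss000000
```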
app-service | Webjobs Dotnet Deploy Vs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/webjobs-dotnet-deploy-vs.md | Some of the fields in this dialog box correspond to fields on the **Add WebJob** WebJob deployment information: -* For information about command-line deployment, see [Enabling Command-line or Continuous Delivery of Azure WebJobs](https://azure.microsoft.com/blog/2014/08/18/enabling-command-line-or-continuous-delivery-of-azure-webjobs/). +* For information about command-line deployment, see [Enabling Command-line or Continuous Delivery of Azure WebJobs](https://azure.microsoft.com/blog/enabling-command-line-or-continuous-delivery-of-azure-webjobs/). * If you deploy a WebJob, and then decide you want to change the type of WebJob and redeploy, delete the *webjobs-publish-settings.json* file. Doing so causes Visual Studio to redisplay the publishing options, so you can change the type of WebJob. If you enable **Always on** in Azure, you can use Visual Studio to change the We ## Next steps > [!div class="nextstepaction"]-> [Learn more about the WebJobs SDK](webjobs-sdk-how-to.md) +> [Learn more about the WebJobs SDK](webjobs-sdk-how-to.md) |
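For reference, *webjobs-publish-settings.json* is a small JSON file in the project. A minimal sketch resembles the following; the job name is illustrative, and the exact properties depend on the WebJob type you selected.

```json
{
  "$schema": "http://schemastore.org/schemas/json/webjobs-publish-settings.json",
  "webJobName": "MyWebJob",
  "runMode": "Continuous"
}
```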
azure-app-configuration | Concept Config File | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-config-file.md | Key Vault references require a particular content type during importing, so you Run the following CLI command to import it with the `test` label and the Key Vault reference content type. ```azurecli-interactive-az appconfig kv import --label test --content-type application/vnd.microsoft.appconfig.keyvaultref+json;charset=utf-8 --name <your store name> --source file --path keyvault-refs.json --format json +az appconfig kv import --label test --content-type "application/vnd.microsoft.appconfig.keyvaultref+json;charset=utf-8" --name <your store name> --source file --path keyvault-refs.json --format json ``` The following table shows all the imported data in your App Configuration store. |
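The quotes around the content type matter because the semicolon in `charset=utf-8` is otherwise treated by the shell as a command separator. For reference, a minimal sketch of the *keyvault-refs.json* import file maps each key to a `uri` that points at the Key Vault secret; the key, vault, and secret names are illustrative.

```json
{
  "TestApp:Secret": {
    "uri": "https://<your-vault-name>.vault.azure.net/secrets/<your-secret-name>"
  }
}
```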
azure-cache-for-redis | Cache Azure Active Directory For Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-azure-active-directory-for-authentication.md | Although access key authentication is simple, it comes with a set of challenges Azure Cache for Redis offers a password-free authentication mechanism by integrating with [Azure Active Directory](/azure/active-directory/fundamentals/active-directory-whatis). This integration also includes [role-based access control](/azure/role-based-access-control/) functionality provided through [access control lists (ACLs)](https://redis.io/docs/management/security/acl/) supported in open source Redis. -> [!IMPORTANT] -> The updates to Azure Cache for Redis that enable Azure Active Directory for authentication are available only in East US region. - To use the ACL integration, your client application must assume the identity of an Azure Active Directory entity, like service principal or managed identity, and connect to your cache. In this article, you learn how to use your service principal or managed identity to connect to your cache, and how to grant your connection predefined permissions based on the Azure AD artifact being used for the connection. ## Scope of availability |
azure-cache-for-redis | Cache Configure Role Based Access Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-configure-role-based-access-control.md | Managing access to your Azure Cache for Redis instance is critical to ensure tha Azure Cache for Redis now integrates this ACL functionality with Azure Active Directory (Azure AD) to allow you to configure your Data Access Policies for your application's service principal and managed identity. -> [!IMPORTANT] -> The updates to Azure Cache for Redis that enable Azure Active Directory for role-based access control are available only in East US region. - Azure Cache for Redis offers three built-in access policies: _Owner_, _Contributor_, and _Reader_. If the built-in access policies don't satisfy your data protection and isolation requirements, you can create and use your own custom data access policy as described in [Configure custom data access policy](#configure-a-custom-data-access-policy-for-your-application). ## Scope of availability |
azure-cache-for-redis | Cache Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-whats-new.md | +## June 2023 ++Azure Active Directory for authentication and role-based access control are available across regions that support Azure Cache for Redis. + ## May 2023 ### Azure Active Directory-based authentication and authorization (preview) Azure Active Directory (Azure AD) based [authentication and authorization](cache-azure-active-directory-for-authentication.md) is now available for public preview with Azure Cache for Redis. With this Azure AD integration, users can connect to their cache instance without an access key and use [role-based access control](cache-configure-role-based-access-control.md) to connect to their cache instance. -> [!IMPORTANT] -> The updates to Azure Cache for Redis that enable both Azure Active Directory for authentication and role-based access control are available only in East US region. - This feature is available for Azure Cache for Redis Basic, Standard, and Premium SKUs. With this update, customers can look forward to increased security and a simplified authentication process when using Azure Cache for Redis. ### Support for up to 30 shards for clustered Azure Cache for Redis instances |
azure-functions | Functions Bindings Service Bus Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus-trigger.md | app = func.FunctionApp() @app.function_name(name="ServiceBusQueueTrigger1") @app.service_bus_queue_trigger(arg_name="msg", queue_name="<QUEUE_NAME>", - connection="<CONNECTION_SETTING">) + connection="<CONNECTION_SETTING>") def test_function(msg: func.ServiceBusMessage): logging.info('Python ServiceBus queue trigger processed message: %s', msg.get_body().decode('utf-8')) |
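The `connection` property names an app setting that holds the Service Bus connection string rather than containing the string itself. For local development, that setting typically lives in *local.settings.json*; the following is a minimal sketch that keeps the placeholder from the snippet above.

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "python",
    "<CONNECTION_SETTING>": "<SERVICE_BUS_CONNECTION_STRING>"
  }
}
```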
azure-government | Documentation Accelerate Compliance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/documentation-accelerate-compliance.md | Title: How to accelerate your journey to compliance with Azure -description: Provides an overview of resources for Development, Automation, and Advisory partners and how they can accelerate their path to ATO with Azure + Title: How to accelerate your journey to FedRAMP compliance with Azure +description: Provides an overview of resources for Development, Automation, and Advisory partners to help them accelerate their path to ATO with Azure. cloud: gov na Last updated 05/30/2023 - -# Compliance program overview -Accelerating your path to compliance in Azure is a focused program that targets the provisioning of learning resources and implementation tools by educating, providing architectural references, and support during the scoping and implementation of your project. In addition, we work with key assessment and automation partners to share reference architectures, solutions, alternatives both first party and third party that can help you meet your compliance needs. +# FedRAMP compliance program overview -As a partner who provides a service in this field, you can publish your offering in the marketplace that will expand the reach of your services. +Accelerating your path to US Federal Risk and Authorization Management Program (FedRAMP) compliance in Azure is a focused program that provides learning resources and implementation tools. The goal of the program is education and support during the scoping and implementation of your project. Moreover, Microsoft works with key assessment and automation partners to share reference architectures and solutions that can help you meet your compliance needs. -## Customers +As a partner who provides a service in this field, you can publish your offering in the marketplace, which expands the reach of your service. -The US Government, as well as many other organizations, relies on commercial software companies to achieve its mission. As part of the procurement and consumption processes, the Authority to Operate (ATO) was implemented to ensure that the development, use, and operation of such commercial software and platforms, is done in accordance with security and data protection necessary to safeguard government information. While the process has the best intentions, the inherent complexity creates a long and expensive project that discourages many Independent Software Vendors (ISVs) to go down this path. +## Customers -The adoption of cloud technologies by the Federal Government is predicated on the Federal Risk Authorization Management Program (FedRAMP). This is a government-wide program that provides a standardized approach to security assessment, authorization, and continuous monitoring for cloud products and services. This approach uses a "do once, use many times" framework that saves cost, time, and staff required to conduct redundant Agency security assessments. The program is based on the NIST SP 800-53 security controls. +US Government agencies and many other organizations rely on commercial software companies to achieve their missions. FedRAMP was established to provide a standardized approach for assessing, monitoring, and authorizing cloud computing products and services. This approach uses a "do once, use many times" framework that saves cost, time, and resources required to conduct individual agency security assessments. 
FedRAMP is based on the National Institute of Standards and Technology (NIST) SP 800-53 standard, augmented by FedRAMP controls and control enhancements. There are two types of FedRAMP authorizations for cloud - * A Provisional Authority to Operate (P-ATO) through the FedRAMP Joint Authorization Board (JAB) - * An Agency Authority to Operate (ATO) +- A Provisional Authority to Operate (P-ATO) issued by the FedRAMP Joint Authorization Board (JAB) +- An agency Authority to Operate (ATO) ### P-ATO process -A FedRAMP P-ATO is an initial approval of the cloud service provider (CSP) authorization package by the JAB that an Agency can leverage to grant an ATO for the acquisition and use of the cloud service within their Agency. The JAB consists of the Chief Information Officers (CIOs) from DoD, DHS, and GSA, supported by designated technical representatives (TRs) from their respective member organizations. A P-ATO means that the JAB has reviewed the cloud service's authorization package and provided a provisional approval for Federal Agencies to leverage when granting an ATO for a cloud system. For a cloud service to enter the JAB process, it must first be prioritized through FedRAMP Connect. +A FedRAMP P-ATO is an initial approval of the cloud service provider (CSP) authorization package by the JAB. An agency can rely on P-ATO to grant an ATO for the acquisition and use of the cloud service within their agency. The JAB consists of the Chief Information Officers (CIOs) from the US Department of Defense (DoD), Department of Homeland Security (DHS), and General Services Administration (GSA), supported by designated technical representatives (TRs) from their respective member organizations. A P-ATO means that the JAB has reviewed the cloud service's authorization package and provided a provisional approval for federal agencies to use when granting an ATO for a cloud services offering. ### Agency ATO process -As part of the Agency authorization process, a CSP works directly with the Agency sponsor who reviews the cloud service's security package. After completing a security assessment, the head of an Agency (or their designee) can grant an ATO. +As part of the agency authorization process, a CSP works directly with the agency sponsor who reviews the cloud service's security package. After completing a security assessment, the head of an agency (or their designee) can grant an ATO. -Taking the above into consideration, an ISV can choose to go for JAB authorization, which grants a generalized authorization to its solution and can be used with multiple agencies. This process tends to be longer. They can also choose to go for an Agency ATO, which is specific to the Government customer they are serving. This customer acts as the sponsor and may even have "reciprocity" with other agencies which allows for a faster, smoother adoption of the company's solution with a different customer. +Consequently, an ISV can choose to go for a JAB authorization, which grants a generalized authorization to its solution and can be used with multiple agencies. This process tends to be longer. They can also choose to go for an agency ATO, which is specific to the Government customer they're serving. This customer acts as the sponsor and may even have "reciprocity" with other agencies, which allows for a faster, smoother adoption of the company's solution with a different customer. ## Partners -Microsoft is able to scale through its partners. 
Scale is what will allow us to create a more predictable, cost-effective, and speedy delivery. These concerns are also common with perusing an ATO. We are focusing on enabling two main kinds of partnerships: +Microsoft is able to scale through its partners. Scale is what allows us to create a more predictable, cost-effective, and speedy delivery. These concerns are also common with pursuing an ATO. We're focused on enabling two main kinds of partnerships: -- **Advisory:** enables partners to create offerings based on Azure that shepherd a customer through individual steps or the entire ATO process. These partners offer consulting services bundled with some automated solutions that add value to what Azure Compliance Launchpad provides. They can usually be contracted directly, by reference, or via the Marketplace. +- **Advisory:** enables partners to create offerings based on Azure that guide a customer through individual steps or the entire ATO process. These partners offer consulting services bundled with some automated solutions that add value to what Azure Compliance Launchpad provides. They can usually be contracted directly, by reference, or via Microsoft Azure Marketplace. - **Automation:** there are two types of automation partners we focus on:- - Foundational partners, which enable integrated 3rd party solutions with Azure and help you achieve / meet controls from your FedRAMP Package. These partners are part of our recommended reference architectures. - - True automation partners that help automate certain aspects of the ATO journey such as the System Security Plan (SSP) generation, self-healing, alerts, and monitoring. + - Foundational partners, which enable integration of third party solutions with Azure and help you achieve / meet controls from your FedRAMP package. These partners are part of our recommended reference architectures. + - True automation partners that help automate certain aspects of the ATO journey such as the FedRAMP System Security Plan (SSP) generation, self-healing, alerts, and monitoring. - > [!NOTE] -> Partners are asked to publish their solutions to Microsoft Azure Marketplace. Steps on how to achieve that are presented below. +> [!NOTE] +> Partners are asked to publish their solutions to Azure Marketplace. See the following steps for guidance. ## Publishing to Azure Marketplace -1. Join the Partner Network - It's a requirement for publishing but easy to sign up. Instructions are located here: [Ensure you have a MCPP ID and Partner Center Account](../../marketplace/create-account.md#create-a-partner-center-account-and-enroll-in-the-commercial-marketplace). -2. Enable your partner center account as Publisher / Developer for Marketplace, follow the instructions [here](../../marketplace/create-account.md). -3. With an enabled Partner Center Account, publish listing as a SaaS App as instructed [here](../../marketplace/create-new-saas-offer.md). +1. Join the Partner Network – It's a requirement for publishing but easy to sign up. For instructions, see [Create a Partner Center account and enroll in the commercial marketplace](../../marketplace/create-account.md#create-a-partner-center-account-and-enroll-in-the-commercial-marketplace). +2. Enable your partner center account as Publisher / Developer for Marketplace by following the instructions in [Create a commercial marketplace account in Partner Center](../../marketplace/create-account.md). +3. 
With an enabled Partner Center Account, publish your listing as a SaaS application as explained in [Create a SaaS offer](../../marketplace/create-new-saas-offer.md). -For a list of existing Azure Marketplace offerings in this space, visit [this page](https://aka.ms/azclmarketplace). +For a list of existing Azure Marketplace offerings in this space, visit [Azure Marketplace](https://aka.ms/azclmarketplace). -## Additional resources +## More resources - > [!NOTE] ->The information provided here will allow partners and customers to sign up and learn about the compliance program. The program is designed to help Azure and Azure Government customers successfully prepare their environments for authorization and request a FedRAMP ATO. This information does not constitute an offer of any kind, and submitting the forms below in no way guarantees participation in the program. At this time, the program details shared with partners and customers are notional and subject to change without notice. +> [!NOTE] +> The information provided here will allow you to sign up and learn about the FedRAMP compliance program. The program is designed to help Azure and Azure Government customers successfully prepare their environments for authorization and request a FedRAMP ATO. This information does not constitute an offer of any kind, and submitting the following forms in no way guarantees participation in the program. Currently, the program details shared with partners and customers are notional and subject to change without notice. - * Free [training on FedRAMP](https://www.fedramp.gov/training/). - * FedRAMP [templates](https://www.fedramp.gov/templates/) to help you with program requirements. - * Get familiar with the [FedRAMP Marketplace](https://marketplace.fedramp.gov/#/products). - * Learn more about [Azure Compliance Offerings per market and industry](https://learn.microsoft.com/azure/compliance/). +- [FedRAMP training resources](https://www.fedramp.gov/training/). +- [FedRAMP documents and templates](https://www.fedramp.gov/documents-templates/) to help you with program requirements. +- Get familiar with the [FedRAMP Marketplace](https://marketplace.fedramp.gov/#/products). +- Learn more about [Azure Government compliance](../documentation-government-plan-compliance.md). ## Next steps-Review the documentation above. -Review the Azure Marketplace [Publishing guide by offer type](https://learn.microsoft.com/partner-center/marketplace/publisher-guide-by-offer-type) for further tips and troubleshooting. -If you are still facing issues, open a ticket in Partner Center. ++Review the [Publishing guide by offer type](/partner-center/marketplace/publisher-guide-by-offer-type) for further tips and troubleshooting. If you're still facing issues, open a ticket in Partner Center. |
azure-government | Documentation Government Csp Application | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-csp-application.md | Azure Government is available for purchase via different channels, one of them b ## Becoming a government CSP -Before being able to apply for CSP or any other programs that run under the Microsoft Partner Network, you need to obtain a Microsoft Partner Network ID (MPN ID). For more information, visit the [Microsoft Partner Network](https://partner.microsoft.com/cloud-solution-provider/get-started) page. After you fulfill the basic requirement of becoming a Microsoft Partner, you are ready to become a Government CSP. To initiate your application, see [Sell cloud services to the US government](https://partner.microsoft.com/membership/cloud-solution-provider/cloud-for-government). +Before being able to apply for CSP or any other programs that run under the Microsoft Partner Network, you need to obtain a Microsoft Partner Network ID (MPN ID). For more information, visit the [Microsoft Partner Network](https://partner.microsoft.com/cloud-solution-provider/get-started) page. After you fulfill the basic requirement of becoming a Microsoft Partner, you're ready to become a Government CSP. To initiate your application, see [Sell cloud services to the US government](https://partner.microsoft.com/membership/cloud-solution-provider/cloud-for-government). -[Azure Government](./documentation-government-welcome.md) is a physically isolated instance of Azure that delivers services with world-class security and compliance critical to the US government. These services maintain FedRAMP and DoD authorizations, CJIS state-level agreements, support for IRS 1075, ability to sign a HIPAA Business Associate Agreement, and much more. Azure Government provides an extra layer of protection to customers through contractual commitments regarding storage of customer data in the United States and limiting potential access to systems processing customer data to screened US persons. Most US government agencies and their partners are best aligned with Azure Government. +[Azure Government](./documentation-government-welcome.md) is a physically isolated instance of Azure that delivers services with world-class security and compliance critical to the US government. These services maintain FedRAMP and DoD authorizations, CJIS state-level agreements, support for IRS 1075, ability to sign a HIPAA Business Associate Agreement, and others. For more information, see [Azure Government compliance](./documentation-government-plan-compliance.md). Azure Government provides an extra layer of protection to customers through contractual commitments regarding storage of customer data in the United States. Moreover, Azure Government limits potential access to systems processing customer data to screened US persons. Most US government agencies and their partners are best aligned with Azure Government. ## Obtaining your government tenant -The process begins with a request for an Azure Government tenant. For more information and to begin the validation, see [Microsoft Azure Government trial](https://azure.microsoft.com/global-infrastructure/government/request/?ReqType=CSP). Once complete, you should receive a tenant to activate your enrollment in the Cloud Solution Provider program for the US government. Validation steps are shown below: +The process begins with a request for an Azure Government tenant. 
For more information and to begin the validation, see [Microsoft Azure Government trial](https://azure.microsoft.com/global-infrastructure/government/request/?ReqType=CSP). Once complete, you should receive a tenant to activate your enrollment in the Cloud Solution Provider program for the US government. The following validation steps are in place: -- Be enrolled in the [Microsoft Cloud Partner Program](/partner-center/mpn-overview) (have a MCPP ID).+- Be enrolled in the [Microsoft Cloud Partner Program](/partner-center/mpn-overview) (have an MCPP ID). - Verification of legitimacy of the Company, Systems Integrator, Distributor, or Independent Software Vendor (ISV) applying for the tenant.-- Verification of business engagements with government customers (for example, proof of services rendered to government agencies, statements of works, evidence of being part of GSA Schedule).+- Verification of business engagements with government customers (for example, proof of services rendered to government agencies, statements of work, evidence of being part of GSA Schedule). - If you already have an Azure Government tenant, you can use your existing credentials to complete the CSP application. - Ensure that emails coming from [US Government Cloud Eligibility](mailto:usgcce@microsoft.com) are reaching your Inbox. - Check your spam or junk email folder, as credentials and requests for further information come from this alias. ## Applying for government CSP -Once you have the credentials described above, navigate to [Partner Center for Microsoft US Government Cloud](https://partner.microsoft.com/pcv/register/joinnow/enrollmentwelcome/ResellerNationalCloud/migrate?cloudInstance=UnitedStatesGovernment) to apply for the CSP Government reseller program. It takes 5-6 days to process the application, and, once approved, you should receive an email to log in to [Partner Center](https://partner.microsoft.com/dashboard/home) to accept the Terms and Conditions. For more information, see [Partner Center documentation](/partner-center/overview). +Once you have the credentials described previously, navigate to [Partner Center for Microsoft US Government Cloud](/partner-center/partner-center-for-microsoft-us-govt-cloud) to apply for the CSP Government reseller program. It takes 5-6 days to process the application, and, once approved, you should receive an email to log in to [Partner Center](https://partner.microsoft.com/dashboard/home) to accept the Terms and Conditions. For more information, see [Partner Center documentation](/partner-center/overview). > [!NOTE]-> Terms and Conditions are not negotiable for the Cloud Solution Provider program. If you wish to discuss customer terms that you have in place for your Commercial agreement, contact your Microsoft account representative. +> Terms and Conditions aren't negotiable for the Cloud Solution Provider program. If you wish to discuss customer terms that you have in place for your Commercial agreement, contact your Microsoft account representative. The application process includes: The application process includes: - Estimation of potential revenue - Company validation via Dun and Bradstreet - Email verification-- Verification of an active enrollment in the Advanced Support for Partners program or Prmier Support for Partners program. More information [here](https://partner.microsoft.com/support/partnersupport).+- Verification of an active enrollment in the Advanced Support for Partners program or Premier Support for Partners program. 
More information is available from [Compare partner support plans](https://partner.microsoft.com/support/partnersupport). - Acceptance of [Terms and Conditions](https://download.microsoft.com/download/2/C/8/2C8CAC17-FCE7-4F51-9556-4D77C7022DF5/MCRA2018_AOC_USGCC_ENG_Feb2019_CR.pdf) -After the validation has been completed and terms have been signed, you are ready to transact. For more information on billing, see [Azure plan](/partner-center/azure-plan-lp). +After the validation has been completed and terms have been signed, you're ready to transact. For more information on billing, see [Azure plan](/partner-center/azure-plan-lp). ## Extra resources After the validation has been completed and terms have been signed, you are read - Agreements for end customers and partners in the CSP program are located on [CSP Resources](/partner-center/csp-documents-and-learning-resources). The customer agreement to be flown down is the [Microsoft Customer Agreement](/partner-center/agreements) (MCA). - For a list of available services, see [Azure services available in the CSP program](/partner-center/azure-plan-available). - Get your questions answered by visiting [FAQ for Partner Center](/partner-center/faq-for-us-govt-cloud).-- If you are still unclear about CSP or are looking to apply for the commercial side of the program, see [Enroll in the CSP program](/partner-center/enrolling-in-the-csp-program).-- If you are interested in Office 365 GCC for CSP, which is transacted via the CSP for Commercial platform, see [Sell Office 365 Government GCC for CSP subscriptions to qualified customers](/partner-center/csp-gcc-overview).+- If you're still unclear about CSP or are looking to apply for the commercial side of the program, see [Enroll in the CSP program](/partner-center/enrolling-in-the-csp-program). +- If you're interested in Office 365 GCC for CSP, which is transacted via the CSP for Commercial platform, see [Sell Office 365 Government GCC for CSP subscriptions to qualified customers](/partner-center/csp-gcc-overview). ## Next steps -Once you have onboarded and are ready to create your first customer, make sure to review [Resources for building your Government CSP practice](https://devblogs.microsoft.com/azuregov/resources-for-building-your-government-csp-practice/). To review further documentation please visit the FAQ located [here](/partner-center/faq-for-us-govt-cloud). For all other questions, please open a ticket within Partner Center. +Once you have onboarded and are ready to create your first customer, make sure to review [Resources for building your Government CSP practice](https://devblogs.microsoft.com/azuregov/resources-for-building-your-government-csp-practice/). To review further documentation, visit the [FAQ](/partner-center/faq-for-us-govt-cloud). For all other questions, open a ticket in Partner Center. |
azure-government | Documentation Government Overview Itar | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-overview-itar.md | -To help you navigate export control rules, Microsoft has published the [Microsoft Azure Export Controls](https://aka.ms/Azure-Export-Paper) whitepaper. It describes US export controls particularly as they apply to software and technical data, reviews potential sources of export control risks, and offers specific guidance to help you assess your obligations under these controls. +To help you navigate export control rules, Microsoft has published the [Microsoft Azure Export Controls](https://aka.ms/Azure-Export-Paper) whitepaper. It describes US export controls particularly as they apply to software and technical data, reviews potential sources of export control risks, and offers specific guidance to help you assess your obligations under these controls. Extra information is available from the Cloud Export FAQ, which is accessible from Frequently Asked Questions on [Exporting Microsoft products](https://www.microsoft.com/exporting). > [!NOTE] > **Disclaimer:** You're wholly responsible for ensuring your own compliance with all applicable laws and regulations. Information provided in this article doesn't constitute legal advice, and you should consult your legal advisor for any questions regarding regulatory compliance. Learn more about: - [Microsoft government solutions](https://www.microsoft.com/enterprise/government) - [What is Azure Government?](./documentation-government-welcome.md) - [Explore Azure Government](https://azure.microsoft.com/global-infrastructure/government/)+- [Exporting Microsoft products](https://www.microsoft.com/exporting) - [Azure Government compliance](./documentation-government-plan-compliance.md) - [Azure EAR compliance offering](/azure/compliance/offerings/offering-ear)-- [Azure FedRAMP compliance offering](/azure/compliance/offerings/offering-fedramp) - [Azure ITAR compliance offering](/azure/compliance/offerings/offering-itar) - [Azure DoE 10 CFR Part 810 compliance offering](/azure/compliance/offerings/offering-doe-10-cfr-part-810)+- [Azure FedRAMP compliance offering](/azure/compliance/offerings/offering-fedramp) |
azure-monitor | Agents Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md | The tables below provide a comparison of Azure Monitor Agent with the legacy the | | VM Insights | X (Public preview) | X | | | | Microsoft Defender for Cloud | X (Public preview) | X | | | | Automation Update Management | | X | |+| | Azure Stack HCI | X | | | | | Update Management Center | N/A (Public preview, independent of monitoring agents) | | | | | Change Tracking | X (Public preview) | X | | | | SQL Best Practices Assessment | X | | | View [supported operating systems for Azure Arc Connected Machine agent](../../a | Windows 10 Enterprise<br>(including multi-session) and Pro<br>(Server scenarios only<sup>1</sup>) | X | X | X | | Windows 8 Enterprise and Pro<br>(Server scenarios only<sup>1</sup>) | | X | | | Windows 7 SP1<br>(Server scenarios only<sup>1</sup>) | | X | |-| Azure Stack HCI | | X | | +| Azure Stack HCI | X | X | | <sup>1</sup> Running the OS on server hardware, for example, machines that are always connected, always turned on, and not running other workloads (PC, office, browser).<br> <sup>2</sup> Using the Azure Monitor agent [client installer](./azure-monitor-agent-windows-client.md).<br> |
azure-monitor | Azure Monitor Agent Extension Versions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-extension-versions.md | We strongly recommend that you update to the latest version at all times, or opt in ## Version details | Release Date | Release notes | Windows | Linux | |:|:|:|:|-| Apr 2023 | **Windows** <ul><li>AMA: Enable Large Event support based on Region.</li><li>AMA: Upgrade to FluentBit version 2.0.9</li><li>Update Troubleshooter to 1.3.1</li><li>Update ME version to 2.2023.331.1521</li><li>Updating package version for AzSecPack 4.26 release</li></ul>|1.15.0.0| Comming soon| +| May 2023 | **Windows** <ul><li>Enable Large Event support for all regions.</li><li>Update to TroubleShooter 1.4.0.</li><li>Fixed issue when Event Log subscription becomes invalid; will resubscribe.</li><li>AMA: Fixed issue with Large Event sending too large data. Also affecting Custom Log.</li></ul> **Linux** <ul><li>Support for CIS and SELinux [hardening](https://learn.microsoft.com/azure/azure-monitor/agents/agents-overview#linux-hardening-standards)</li><li>Include Ubuntu 22.04 (jammy) in azure-mdsd package publishing</li><li>Move storage SDK patch to build container</li><li>Add system telegraf counters to AMA</li><li>Drop msgpack and syslog data if not configured in active configuration</li><li>Limit the events sent to Public ingestion pipeline</li><li>**Fixes** <ul><li>Fix mdsd crash in init when in persistent mode </li><li>Remove FdClosers from ProtocolListeners to avoid a race condition</li><li>Fix sed regex special character escaping issue in rpm macro for Centos 7.3.Maipo</li><li>Fix latency and future timestamp issue for 3P</li><li>Install AMA syslog configs only if customer is opted in for syslog in DCR</li><li>Fix heartbeat time check</li><li>Skip unnecessary cleanup in fatal signal handler</li><li>Fix case where fast-forwarding may cause intervals to be skipped</li><li>Fix comma separated custom log paths with fluent</li></ul></li></ul> | 1.16.0 | 1.26.2 | +| Apr 2023 | **Windows** <ul><li>AMA: Enable Large Event support based on Region.</li><li>AMA: Upgrade to FluentBit version 2.0.9</li><li>Update Troubleshooter to 1.3.1</li><li>Update ME version to 2.2023.331.1521</li><li>Updating package version for AzSecPack 4.26 release</li></ul>|1.15.0.0| Coming soon| | Mar 2023 | **Windows** <ul><li>Text file collection improvements to handle high rate of logging and for continuous tailing in case of longer lines</li><li>VM Insights fixes for collecting metrics from non-English OS</li></ul> | 1.14.0.0 | Coming soon | | Feb 2023 | <ul><li>**Linux (hotfix)** Resolved potential data loss due to "Bad file descriptor" errors seen in the mdsd error log with previous version. Please upgrade to hotfix version</li><li>**Windows** Reliability improvements in fluentbit buffering to handle larger text files</li></ul> | 1.13.1.0 | 1.25.2<sup>Hotfix</sup> | | Jan 2023 | **Linux** <ul><li>RHEL 9 and Amazon Linux 2 support</li><li>Update to OpenSSL 1.1.1s and require TLS 1.2 or higher</li><li>Performance improvements</li><li>Improvements in Garbage Collection for persisted disk cache and handling corrupted cache files better</li><li>**Fixes** <ul><li>Set agent service memory limit for CentOS/RedHat 7 distros. 
Resolved MemoryMax parsing error</li><li>Fixed modifying rsyslog system-wide log format caused by installer on RedHat/Centos 7.3</li><li>Fixed permissions to config directory</li><li>Installation reliability improvements</li><li>Fixed permissions on default file so rpm verification doesn't fail</li><li>Added traceFlags setting to enable trace logs for agent</li></ul></li></ul> **Windows** <ul><li>Fixed issue related to incorrect *EventLevel* and *Task* values for Log Analytics *Event* table, to match Windows Event Viewer values</li><li>Added missing columns for IIS logs - *TimeGenerated, Time, Date, Computer, SourceSystem, AMA, W3SVC, SiteName*</li><li>Reliability improvements for metrics collection</li><li>Fixed machine restart issues for Arc-enabled servers related to repeated calls to HIMDS service</li></ul> | 1.12.0.0 | 1.25.1 | | Nov-Dec 2022 | <ul><li>Support for air-gapped clouds added for [Windows MSI installer for clients](./azure-monitor-agent-windows-client.md) </li><li>Reliability improvements for using AMA with Custom Metrics destination</li><li>Performance and internal logging improvements</li></ul> | 1.11.0.0 | None | -| Oct 2022 | **Windows** <ul><li>Increased reliability of data uploads</li><li>Data quality improvements</li></ul> **Linux** <ul><li>Support for `http_proxy` and `https_proxy` environment variables for [network proxy configurations](./azure-monitor-agent-data-collection-endpoint.md#proxy-configuration) for the agent</li><li>[Text logs](./data-collection-text-log.md) <ul><li>Network proxy support enabled</li><li>Fixed missing `_ResourceId`</li><li>Increased maximum line size support to 1MB</li></ul></li><li>Support ingestion of syslog events whose timestamp is in the future</li><li>Performance improvements</li><li>Fixed `diskio` metrics instance name dimension to use the disk mount path(s) instead of the device name(s)</li><li>Fixed world writable file issue to lockdown write access to certain agent logs and configuration files stored locally on the machine</li></ul> | 1.10.0.0 | 1.24.2 | +| Oct 2022 | **Windows** <ul><li>Increased reliability of data uploads</li><li>Data quality improvements</li></ul> **Linux** <ul><li>Support for `http_proxy` and `https_proxy` environment variables for [network proxy configurations](./azure-monitor-agent-data-collection-endpoint.md#proxy-configuration) for the agent</li><li>[Text logs](./data-collection-text-log.md) <ul><li>Network proxy support enabled</li><li>Fixed missing `_ResourceId`</li><li>Increased maximum line size support to 1MB</li></ul></li><li>Support ingestion of syslog events whose timestamp is in the future</li><li>Performance improvements</li><li>Fixed `diskio` metrics instance name dimension to use the disk mount path(s) instead of the device name(s)</li><li>Fixed world writable file issue to lock down write access to certain agent logs and configuration files stored locally on the machine</li></ul> | 1.10.0.0 | 1.24.2 | | Sep 2022 | Reliability improvements | 1.9.0.0 | None | | August 2022 | **Common updates** <ul><li>Improved resiliency: Default lookback (retry) time updated to last 3 days (72 hours) up from 60 minutes, for agent to collect data post interruption. This is subject to the default offline cache size of 10 gigabytes</li><li>Fixes the preview custom text log feature that was incorrectly removing the *TimeGenerated* field from the raw data of each event. 
All events are now additionally stamped with agent (local) upload time</li><li>Reliability and supportability improvements</li></ul> **Windows** <ul><li>Fixed datetime format to UTC</li><li>Fix to use default location for firewall log collection, if not provided</li><li>Reliability and supportability improvements</li></ul> **Linux** <ul><li>Support for OpenSuse 15, Debian 11 ARM64</li><li>Support for coexistence of Azure Monitor agent with legacy Azure Diagnostic extension for Linux (LAD)</li><li>Increased max-size of UDP payload for Telegraf output to prevent dimension truncation</li><li>Prevent unconfigured upload to Azure Monitor Metrics destination</li><li>Fix for disk metrics wherein *instance name* dimension will use the disk mount path(s) instead of the device name(s), to provide parity with legacy agent</li><li>Fixed *disk free MB* metric to report megabytes instead of bytes</li></ul> | 1.8.0.0 | 1.22.2 | | July 2022 | Fix for mismatch event timestamps for Sentinel Windows Event Forwarding | 1.7.0.0 | None |-| June 2022 | Bugfixes with user assigned identity support, and reliability improvements | 1.6.0.0 | None | +| June 2022 | Bug fixes with user assigned identity support, and reliability improvements | 1.6.0.0 | None | | May 2022 | <ul><li>Fixed issue where agent stops functioning due to faulty XPath query. With this version, only query related Windows events will fail, other data types will continue to be collected</li><li>Collection of Windows network troubleshooting logs added to 'CollectAMAlogs.ps1' tool</li><li>Linux support for Debian 11 distro</li><li>Fixed issue to list mount paths instead of device names for Linux disk metrics</li></ul> | 1.5.0.0 | 1.21.0 | | April 2022 | <ul><li>Private IP information added in Log Analytics <i>Heartbeat</i> table for Windows and Linux</li><li>Fixed bugs in Windows IIS log collection (preview) <ul><li>Updated IIS site column name to match backend KQL transform</li><li>Added delay to IIS upload task to account for IIS buffering</li></ul></li><li>Fixed Linux CEF syslog forwarding for Sentinel</li><li>Removed 'error' message for Azure MSI token retrieval failure on Arc to show as 'Info' instead</li><li>Support added for Ubuntu 22.04, RHEL 8.5, 8.6, AlmaLinux and RockyLinux distros</li></ul> | 1.4.1.0<sup>Hotfix</sup> | 1.19.3 | | March 2022 | <ul><li>Fixed timestamp and XML format bugs in Windows Event logs</li><li>Full Windows OS information in Log Analytics Heartbeat table</li><li>Fixed Linux performance counters to collect instance values instead of 'total' only</li></ul> | 1.3.0.0 | 1.17.5.0 |-| February 2022 | <ul><li>Bugfixes for the AMA Client installer (private preview)</li><li>Versioning fix to reflect appropriate Windows major/minor/hotfix versions</li><li>Internal test improvement on Linux</li></ul> | 1.2.0.0 | 1.15.3 | +| February 2022 | <ul><li>Bug fixes for the AMA Client installer (private preview)</li><li>Versioning fix to reflect appropriate Windows major/minor/hotfix versions</li><li>Internal test improvement on Linux</li></ul> | 1.2.0.0 | 1.15.3 | | January 2022 | <ul><li>Syslog RFC compliance for Linux</li><li>Fixed issue for Linux perf counters not flowing on restart</li><li>Fixed installation failure on Windows Server 2008 R2 SP1</li></ul> | 1.1.5.1<sup>Hotfix</sup> | 1.15.2.0<sup>Hotfix</sup> | | December 2021 | <ul><li>Fixed issues impacting Linux Arc-enabled servers</li><li>'Heartbeat' table > 'Category' column reports "Azure Monitor Agent" in Log Analytics for Windows</li></ul> | 1.1.4.0 | 1.14.7.0<sup>2</sup> | | 
September 2021 | <ul><li>Fixed issue causing data loss on restarting the agent</li><li>Fixed issue for Arc Windows servers</li></ul> | 1.1.3.2<sup>Hotfix</sup> | 1.12.2.0 <sup>1</sup> | |
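To compare these release notes against what's actually deployed, one option is to query the extension version installed on a VM. The following is a minimal sketch; the resource group and VM names are illustrative.

```azurecli
# List the Azure Monitor Agent extension and its installed version on a VM
az vm extension list --resource-group myResourceGroup --vm-name myVM \
    --query "[?contains(name, 'AzureMonitor')].{Name:name, Version:typeHandlerVersion}" -o table
```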
azure-monitor | Use Azure Monitor Agent Troubleshooter | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/use-azure-monitor-agent-troubleshooter.md | The Azure Monitor Agent isn't a service that runs in the context of an Azure Res ### Run Windows Troubleshooter 1. Log in to the machine to be diagnosed 2. Go to the location where the troubleshooter is automatically installed: C:/Packages/Plugins/Microsoft.Azure.Monitor.AzureMonitorWindowsAgent/{version}/Troubleshooter-3. Run the Troubleshooter: > Troubleshooter --ama +3. Run the Troubleshooter: > AgentTroubleshooter --ama ### Evaluate the Windows Results The Troubleshooter runs two tests and collects several diagnostic logs. |
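Put together, the Windows steps look like the following sketch. The path comes from step 2; replace `{version}` with the installed agent version.

```powershell
cd "C:\Packages\Plugins\Microsoft.Azure.Monitor.AzureMonitorWindowsAgent\{version}\Troubleshooter"
.\AgentTroubleshooter --ama
```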
azure-monitor | Alerts Create New Alert Rule | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-new-alert-rule.md | Alerts triggered by these alert rules contain a payload that uses the [common al This example creates an "Additional Details" tag with data regarding the "window start time" and "window end time". - **Name:** "Additional Details"- - **Value:** "Evaluation windowStartTime: \${data.alertContext.condition.windowStartTime}. windowEndTime: \${data.alertContext.condition.windowEndTime}" + - **Value:** "Evaluation windowStartTime: \${data.context.condition.windowStartTime}. windowEndTime: \${data.context.condition.windowEndTime}" - **Result:** "AdditionalDetails:Evaluation windowStartTime: 2023-04-04T14:39:24.492Z. windowEndTime: 2023-04-04T14:44:24.492Z" Alerts triggered by these alert rules contain a payload that uses the [common al This example adds data regarding the reason for resolving or firing the alert. - **Name:** "Alert \${data.essentials.monitorCondition} reason"- - **Value:** "\${data.alertContext.condition.allOf[0].metricName} \${data.alertContext.condition.allOf[0].operator} \${data.alertContext.condition.allOf[0].threshold} \${data.essentials.monitorCondition}. The value is \${data.alertContext.condition.allOf[0].metricValue}" + - **Value:** "\${data.context.condition.allOf[0].metricName} \${data.context.condition.allOf[0].operator} \${data.context.condition.allOf[0].threshold} \${data.essentials.monitorCondition}. The value is \${data.context.condition.allOf[0].metricValue}" - **Result:** Example results could be something like: - "Alert Resolved reason: Percentage CPU GreaterThan5 Resolved. The value is 3.585" - "Alert Fired reason": "Percentage CPU GreaterThan5 Fired. The value is 10.585" |
azure-monitor | Alerts Log Webhook | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-log-webhook.md | The following sample payload is for a standard webhook when it's used for log al The following sample payload is for a standard webhook action that's used for alerts based on Log Analytics: > [!NOTE]-> The `"Severity"` field value changes if you've [switched to the current scheduledQueryRules API](/previous-versions/azure/azure-monitor/alerts/alerts-log-api-switch) from the [legacy Log Analytics Alert API](./api-alerts.md). +> The `"Severity"` field value changes if you've [switched to the current scheduledQueryRules API](./alerts-log-api-switch.md) from the [legacy Log Analytics Alert API](./api-alerts.md). ```json { |
azure-monitor | Alerts Manage Alerts Previous Version | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-manage-alerts-previous-version.md | Use the following PowerShell cmdlets to manage rules with the [Scheduled Query R - [Remove-AzScheduledQueryRule](/powershell/module/az.monitor/remove-azscheduledqueryrule): PowerShell cmdlet to delete an existing log alert rule. > [!NOTE]-> The `ScheduledQueryRules` PowerShell cmdlets can only manage rules created in [this version of the Scheduled Query Rules API](/rest/api/monitor/scheduledqueryrule-2018-04-16/scheduled-query-rules). Log alert rules created by using the legacy [Log Analytics Alert API](./api-alerts.md) can only be managed by using PowerShell after you [switch to the Scheduled Query Rules API](/previous-versions/azure/azure-monitor/alerts/alerts-log-api-switch). +> The `ScheduledQueryRules` PowerShell cmdlets can only manage rules created in [this version of the Scheduled Query Rules API](/rest/api/monitor/scheduledqueryrule-2018-04-16/scheduled-query-rules). Log alert rules created by using the legacy [Log Analytics Alert API](./api-alerts.md) can only be managed by using PowerShell after you [switch to the Scheduled Query Rules API](./alerts-log-api-switch.md). Example steps for creating a log alert rule by using PowerShell: |
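For example, a minimal sketch of managing an existing rule with these cmdlets; the resource group and rule names are illustrative.

```powershell
# Retrieve an existing log alert rule
Get-AzScheduledQueryRule -ResourceGroupName "myResourceGroup" -Name "myLogAlertRule"

# Delete the rule when it's no longer needed
Remove-AzScheduledQueryRule -ResourceGroupName "myResourceGroup" -Name "myLogAlertRule"
```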
azure-monitor | Azure Web Apps Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-java.md | To manually update, follow these steps: :::image type="content" source="./media/azure-web-apps/startup-command.png" alt-text="Screenshot of startup command."::: - **Startup Command** won't honor `JAVA_OPTS`. + **Startup Command** doesn't honor `JAVA_OPTS` for JavaSE or `CATALINA_OPTS` for Tomcat. - If you don't use **Startup Command**, create a new environment variable, `JAVA_OPTS`, with the value + If you don't use **Startup Command**, create a new environment variable, `JAVA_OPTS` for JavaSE or `CATALINA_OPTS` for Tomcat, with the value `-javaagent:{PATH_TO_THE_AGENT_JAR}/applicationinsights-agent-{VERSION_NUMBER}.jar`. 4. Restart the app to apply the changes. > [!NOTE]-> If you set the JAVA_OPTS environment variable, you will have to disable Application Insights in the portal. Alternatively, if you prefer to enable Application Insights from the portal, make sure that you don't set the `JAVA_OPTS` variable in App Service configurations settings. +> If you set the `JAVA_OPTS` for JavaSE or `CATALINA_OPTS` for Tomcat environment variable, you will have to disable Application Insights in the portal. Alternatively, if you prefer to enable Application Insights from the portal, make sure that you don't set the `JAVA_OPTS` for JavaSE or `CATALINA_OPTS` for Tomcat variable in App Service configuration settings. ## Release notes |
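If you prefer the CLI to the portal for the environment-variable route, the following is a minimal sketch; the app and resource group names are illustrative, and the placeholders are kept from the step above. Use `CATALINA_OPTS` instead of `JAVA_OPTS` for Tomcat.

```azurecli
az webapp config appsettings set --resource-group myResourceGroup --name myWebApp \
    --settings JAVA_OPTS="-javaagent:{PATH_TO_THE_AGENT_JAR}/applicationinsights-agent-{VERSION_NUMBER}.jar"
```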
azure-monitor | Migrate From Instrumentation Keys To Connection Strings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/migrate-from-instrumentation-keys-to-connection-strings.md | This article walks you through migrating from [instrumentation keys](separate-re 1. Configure the Application Insights SDK by following [How to set connection strings](sdk-connection-string.md#set-a-connection-string). > [!IMPORTANT]-> Using both a connection string and instrumentation key isn't recommended. Whichever was set last takes precedence. +> Using both a connection string and instrumentation key isn't recommended. Whichever was set last takes precedence. Also, using both could lead to [missing data](#missing-data). ## Migration at scale |
azure-monitor | Sampling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sampling.md | Adaptive sampling is enabled by default for all ASP.NET Core applications. You c The default sampling feature can be disabled while adding the Application Insights service. -### [ASP.NET Core 6 and later](#tab/net-core-new) - Add `ApplicationInsightsServiceOptions` after the `WebApplication.CreateBuilder()` method in the `Program.cs` file: ```csharp builder.Services.AddApplicationInsightsTelemetry(aiOptions); var app = builder.Build(); ``` -### [ASP.NET Core 5 and earlier](#tab/net-core-old) --Add `ApplicationInsightsServiceOptions` to the `ConfigureServices()` method in the `Startup.cs` file: --```csharp -public void ConfigureServices(IServiceCollection services) -{ - var aiOptions = new Microsoft.ApplicationInsights.AspNetCore.Extensions.ApplicationInsightsServiceOptions(); - aiOptions.EnableAdaptiveSampling = false; - services.AddApplicationInsightsTelemetry(aiOptions); -} -``` --- The above code disables adaptive sampling. Follow the steps below to add sampling with more customization options. #### Configure sampling settings Use extension methods of `TelemetryProcessorChainBuilder` as shown below to cust > [!IMPORTANT] > If you use this method to configure sampling, please make sure to set the `aiOptions.EnableAdaptiveSampling` property to `false` when calling `AddApplicationInsightsTelemetry()`. After making this change, you then need to follow the instructions in the code block below **exactly** in order to re-enable adaptive sampling with your customizations in place. Failure to do so can result in excess data ingestion. Always test after changing sampling settings, and set an appropriate [daily data cap](../logs/daily-cap.md) to help control your costs. 
-### [ASP.NET Core 6 and later](#tab/net-core-new) - ```csharp using Microsoft.ApplicationInsights.AspNetCore.Extensions; using Microsoft.ApplicationInsights.Extensibility; var builder = WebApplication.CreateBuilder(args); builder.Services.Configure<TelemetryConfiguration>(telemetryConfiguration => {- var builder = telemetryConfiguration.DefaultTelemetrySink.TelemetryProcessorChainBuilder; -- // Using adaptive sampling - builder.UseAdaptiveSampling(maxTelemetryItemsPerSecond: 5); -- // Alternately, the following configures adaptive sampling with 5 items per second, and also excludes DependencyTelemetry from being subject to sampling: - // configuration.DefaultTelemetrySink.TelemetryProcessorChainBuilder.UseAdaptiveSampling(maxTelemetryItemsPerSecond:5, excludedTypes: "Dependency"); + var telemetryProcessorChainBuilder = telemetryConfiguration.DefaultTelemetrySink.TelemetryProcessorChainBuilder; - // If you have other telemetry processors: - builder.Use(next => new AnotherProcessor(next)); + // Using adaptive sampling + telemetryProcessorChainBuilder.UseAdaptiveSampling(maxTelemetryItemsPerSecond: 5); + // Alternately, the following configures adaptive sampling with 5 items per second, and also excludes DependencyTelemetry from being subject to sampling: + // telemetryProcessorChainBuilder.UseAdaptiveSampling(maxTelemetryItemsPerSecond:5, excludedTypes: "Dependency"); }); builder.Services.AddApplicationInsightsTelemetry(new ApplicationInsightsServiceOptions {- EnableAdaptiveSampling = false, + EnableAdaptiveSampling = false, }); var app = builder.Build(); ``` -### [ASP.NET Core 5 and earlier](#tab/net-core-old) --```csharp -using Microsoft.ApplicationInsights.Extensibility --public void Configure(IApplicationBuilder app, IHostingEnvironment env, TelemetryConfiguration configuration) -{ - var builder = configuration.DefaultTelemetrySink.TelemetryProcessorChainBuilder; - // For older versions of the Application Insights SDK, use the following line instead: - // var builder = configuration.TelemetryProcessorChainBuilder; -- // Using adaptive sampling - builder.UseAdaptiveSampling(maxTelemetryItemsPerSecond:5); -- // Alternately, the following configures adaptive sampling with 5 items per second, and also excludes DependencyTelemetry from being subject to sampling. - // builder.UseAdaptiveSampling(maxTelemetryItemsPerSecond:5, excludedTypes: "Dependency"); -- // If you have other telemetry processors: - builder.Use((next) => new AnotherProcessor(next)); - - builder.Build(); -} -``` --- ### Configuring adaptive sampling for Azure Functions Follow instructions from [this page](../../azure-functions/configure-monitoring.md#configure-sampling) to configure adaptive sampling for apps running in Azure Functions. In Metrics Explorer, rates such as request and exception counts are multiplied b ### Configuring fixed-rate sampling for ASP.NET Core applications 1. 
**Disable adaptive sampling**-- ### [ASP.NET Core 6 and later](#tab/net-core-new) Changes can be made after the `WebApplication.CreateBuilder()` method, using `ApplicationInsightsServiceOptions`: In Metrics Explorer, rates such as request and exception counts are multiplied b var app = builder.Build(); ``` - ### [ASP.NET Core 5 and earlier](#tab/net-core-old) - - Changes can be made in the `ConfigureServices()` method, using `ApplicationInsightsServiceOptions`: -- ```csharp - public void ConfigureServices(IServiceCollection services) - { - var aiOptions = new Microsoft.ApplicationInsights.AspNetCore.Extensions.ApplicationInsightsServiceOptions(); - aiOptions.EnableAdaptiveSampling = false; - services.AddApplicationInsightsTelemetry(aiOptions); - } - ``` - - - 1. **Enable the fixed-rate sampling module**-- ### [ASP.NET Core 6 and later](#tab/net-core-new) Changes can be made after the `WebApplication.CreateBuilder()` method: In Metrics Explorer, rates such as request and exception counts are multiplied b var app = builder.Build(); ``` - ### [ASP.NET Core 5 and earlier](#tab/net-core-old) - - Changes can be made in the `Configure()` method: -- ```csharp - public void Configure(IApplicationBuilder app, IHostingEnvironment env) - { - var configuration = app.ApplicationServices.GetService<TelemetryConfiguration>(); -- var builder = configuration.DefaultTelemetrySink.TelemetryProcessorChainBuilder; - // For older versions of the Application Insights SDK, use the following line instead: - // var builder = configuration.TelemetryProcessorChainBuilder; -- // Using fixed rate sampling - double fixedSamplingPercentage = 10; - builder.UseSampling(fixedSamplingPercentage); -- builder.Build(); - } - ``` -- - ### Configuring sampling overrides and fixed-rate sampling for Java applications By default, no sampling is enabled in the Java auto-instrumentation and SDK. Currently, the Java auto-instrumentation supports [sampling overrides](./java-standalone-sampling-overrides.md) and fixed-rate sampling. Adaptive sampling isn't supported in Java. |
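Since the important note in this entry recommends pairing any sampling change with a daily data cap, here's a sketch of setting one on a workspace from the CLI. The resource names are hypothetical; `--quota` is the daily ingestion cap in GB.

```azurecli-interactive
# Hypothetical names; caps daily ingestion at 1 GB (-1 removes the cap).
az monitor log-analytics workspace update \
  --resource-group "my-resource-group" \
  --workspace-name "my-workspace" \
  --quota 1
```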
azure-monitor | Worker Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/worker-service.md | The Application Insights SDK for Worker Service supports both [fixed-rate sampli To configure other sampling settings, you can use the following example: ```csharp using Microsoft.ApplicationInsights.Extensibility; using Microsoft.ApplicationInsights.WorkerService; -public void ConfigureServices(IServiceCollection services) +var builder = Host.CreateApplicationBuilder(args); ++builder.Services.Configure<TelemetryConfiguration>(telemetryConfiguration => {- // ... + var telemetryProcessorChainBuilder = telemetryConfiguration.DefaultTelemetrySink.TelemetryProcessorChainBuilder; - var aiOptions = new ApplicationInsightsServiceOptions(); - - // Disable adaptive sampling. - aiOptions.EnableAdaptiveSampling = false; - services.AddApplicationInsightsTelemetryWorkerService(aiOptions); + // Using adaptive sampling + telemetryProcessorChainBuilder.UseAdaptiveSampling(maxTelemetryItemsPerSecond: 5); - // Add Adaptive Sampling with custom settings. - // The following adds adaptive sampling with 15 items per sec. - services.Configure<TelemetryConfiguration>((telemetryConfig) => - { - var builder = telemetryConfig.DefaultTelemetrySink.TelemetryProcessorChainBuilder; - builder.UseAdaptiveSampling(maxTelemetryItemsPerSecond: 15); - builder.Build(); - }); - //... -} + // Alternatively, the following configures adaptive sampling with 5 items per second, and also excludes DependencyTelemetry from being subject to sampling: + // telemetryProcessorChainBuilder.UseAdaptiveSampling(maxTelemetryItemsPerSecond:5, excludedTypes: "Dependency"); +}); ++builder.Services.AddApplicationInsightsTelemetryWorkerService(new ApplicationInsightsServiceOptions +{ + EnableAdaptiveSampling = false, +}); ++var host = builder.Build(); ``` For more information, see the [Sampling](#sampling) document. |
azure-monitor | Best Practices Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-alerts.md | description: Recommendations for deployment of Azure Monitor alerts and automate Previously updated : 10/18/2021 Last updated : 05/31/2023 |
azure-monitor | Best Practices Analysis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-analysis.md | description: Guidance and recommendations for customizing visualizations beyond Previously updated : 02/14/2023 Last updated : 05/31/2023 |
azure-monitor | Best Practices Data Collection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-data-collection.md | description: Guidance and recommendations for configuring data collection in Azu Previously updated : 10/18/2021 Last updated : 05/31/2023 |
azure-monitor | Best Practices Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-plan.md | description: Guidance and recommendations for planning and design before deployi Previously updated : 10/18/2021 Last updated : 05/31/2023 |
azure-monitor | Azure Monitor Workspace Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/azure-monitor-workspace-overview.md | See [Azure Monitor service limits](../service-limits.md#prometheus-metrics) for - Azure Monitor workspaces are currently only supported in public clouds. - Azure Monitor workspaces don't currently support being moved into a different subscription or resource group once created. +## Data considerations +Data stored in an Azure Monitor workspace is handled in accordance with all standards described in the [Azure Trust Center](https://www.microsoft.com/en-us/trust-center?rtc=1). Several considerations apply specifically to data stored in an Azure Monitor workspace: +- Data is physically stored in the same region that the Azure Monitor workspace is provisioned in. +- Data is encrypted at rest using a Microsoft-managed key. +- Data is retained for 18 months. +- For details about how Azure Monitor managed service for Prometheus handles PII/EUII data, see [Azure Monitor managed service for Prometheus](./prometheus-metrics-overview.md). ## Next steps |
azure-monitor | Integrate Keda | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/integrate-keda.md | KEDA is a Kubernetes-based Event Driven Autoscaler. KEDA lets you drive the scal To integrate KEDA into your Azure Kubernetes Service, you have to deploy and configure a workload identity or pod identity on your cluster. The identity allows KEDA to authenticate with Azure and retrieve metrics for scaling from your Monitor workspace. This article walks you through the steps to integrate KEDA into your AKS cluster using a workload identity.- Note > [!NOTE] > We recommend using Azure Active Directory workload identity. This authentication method replaces pod-managed identity (preview), which integrates with the Kubernetes native capabilities to federate with any external identity providers on behalf of the application. > > The open source Azure AD pod-managed identity (preview) in Azure Kubernetes Service has been deprecated as of 10/24/2022, and the project will be archived in Sept. 2023. For more information, see the deprecation notice. The AKS Managed add-on begins deprecation in Sept. 2023.+> +> Azure Managed Prometheus support starts from KEDA v2.10. If you have an older version of KEDA installed, you must upgrade it to work with Azure Managed Prometheus. ## Prerequisites |
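As a sketch of the workload identity prerequisite mentioned above: an existing AKS cluster can have the OIDC issuer and workload identity features turned on from the CLI before deploying KEDA. The cluster and resource group names are hypothetical.

```azurecli-interactive
# Hypothetical names; enables the features KEDA's Azure authentication relies on.
az aks update \
  --resource-group "my-resource-group" \
  --name "my-aks-cluster" \
  --enable-oidc-issuer \
  --enable-workload-identity
```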
azure-monitor | Metrics Supported | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-supported.md | -Date list was last updated: 05/28/2023. +Date list was last updated: 06/01/2023. Azure Monitor provides several ways to interact with metrics, including charting them in the Azure portal, accessing them through the REST API, or querying them by using PowerShell or the Azure CLI (Command Line Interface). This latest update adds a new column and reorders the metrics to be alphabetical - [Export metrics to storage, Event Hub, or Log Analytics](../essentials/platform-logs-overview.md) -<!--Gen Date: Sun May 28 2023 17:43:46 GMT+0300 (Israel Daylight Time)--> +<!--Gen Date: Thu Jun 01 2023 09:57:38 GMT+0300 (Israel Daylight Time)--> |
azure-monitor | Prometheus Metrics Scrape Default | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-scrape-default.md | The following metrics are collected by default from each default target. All oth - `node_uname_info` **kube-state-metrics (job=kube-state-metrics)**<br>- - `kube_node_status_capacity` - `kube_job_status_succeeded` - `kube_job_spec_completions` - `kube_daemonset_status_desired_number_scheduled` - `kube_daemonset_status_number_ready`- - `kube_deployment_spec_replicas` - `kube_deployment_status_replicas_ready` - `kube_pod_container_status_last_terminated_reason`- - `kube_node_status_condition` + - `kube_pod_container_status_waiting_reason` - `kube_pod_container_status_restarts_total`+ - `kube_node_status_allocatable` + - `kube_pod_owner` - `kube_pod_container_resource_requests` - `kube_pod_status_phase` - `kube_pod_container_resource_limits`- - `kube_node_status_allocatable` - - `kube_pod_info` - - `kube_pod_owner` + - `kube_replicaset_owner` - `kube_resourcequota`- - `kube_statefulset_replicas` - - `kube_statefulset_status_replicas` - - `kube_statefulset_status_replicas_ready` - - `kube_statefulset_status_replicas_current` - - `kube_statefulset_status_replicas_updated` - `kube_namespace_status_phase`+ - `kube_node_status_capacity` - `kube_node_info`- - `kube_statefulset_metadata_generation` - - `kube_pod_labels` - - `kube_pod_annotations` - - `kube_horizontalpodautoscaler_status_current_replicas` - - `kube_horizontalpodautoscaler_status_desired_replicas` - - `kube_horizontalpodautoscaler_spec_min_replicas` - - `kube_horizontalpodautoscaler_spec_max_replicas` - - `kube_node_status_condition` - - `kube_node_spec_taint` - - `kube_pod_container_status_waiting_reason` - - `kube_job_failed` - - `kube_job_status_start_time` + - `kube_pod_info` - `kube_deployment_spec_replicas` - `kube_deployment_status_replicas_available` - `kube_deployment_status_replicas_updated`+ - `kube_statefulset_status_replicas_ready` + - `kube_statefulset_status_replicas` + - `kube_statefulset_status_replicas_updated` + - `kube_job_status_start_time` - `kube_job_status_active`+ - `kube_job_failed` + - `kube_horizontalpodautoscaler_status_desired_replicas` + - `kube_horizontalpodautoscaler_status_current_replicas` + - `kube_horizontalpodautoscaler_spec_min_replicas` + - `kube_horizontalpodautoscaler_spec_max_replicas` - `kubernetes_build_info`+ - `kube_node_status_condition` + - `kube_node_spec_taint` - `kube_pod_container_info`- - `kube_replicaset_owner` ## Default targets scraped for Windows The following Windows targets are configured but not scraped (**disabled/OFF**) by default. You don't have to provide any scrape job configuration for them, but you do need to turn scraping ON for each target by using the [ama-metrics-settings-configmap](https://aka.ms/azureprometheus-addon-settings-configmap) under the `default-scrape-settings-enabled` section, as shown in the sketch below |
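For context, turning one of those Windows targets on amounts to editing the settings configmap and applying it to the cluster. A minimal sketch: the local filename is hypothetical, and the `kube-system` namespace is assumed to be where the Azure Monitor metrics add-on runs.

```bash
# After downloading the settings configmap linked above, set the desired target
# (for example, windowsexporter) to true under default-scrape-settings-enabled,
# then apply it to the cluster:
kubectl apply -f ama-metrics-settings-configmap.yaml --namespace kube-system
```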
azure-monitor | Prometheus Metrics Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-troubleshoot.md | Replica pod scrapes metrics from `kube-state-metrics` and custom scrape targets ## Metrics Throttling -In the Azure portal, navigate to your Azure Monitor Workspace. Go to `Metrics` and verify that the metrics `Active Time Series % Utilization` and `Events Per Minuted Ingested % Utilization` are below 100%. +In the Azure portal, navigate to your Azure Monitor Workspace. Go to `Metrics` and verify that the metrics `Active Time Series % Utilization` and `Events Per Minute Ingested % Utilization` are below 100%. :::image type="content" source="media/prometheus-metrics-troubleshoot/throttling.png" alt-text="Screenshot showing how to navigate to the throttling metrics." lightbox="media/prometheus-metrics-troubleshoot/throttling.png"::: |
azure-monitor | Resource Logs Categories | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs-categories.md | Title: Supported categories for Azure Monitor resource logs description: Understand the supported services and event schemas for Azure Monitor resource logs. Previously updated : 05/28/2023 Last updated : 06/01/2023 If you think something is missing, you can open a GitHub comment at the bottom o * [Analyze logs from Azure storage with Log Analytics](./resource-logs.md#send-to-log-analytics-workspace) -<!--Gen Date: Sun May 28 2023 17:43:46 GMT+0300 (Israel Daylight Time)--> +<!--Gen Date: Thu Jun 01 2023 09:57:38 GMT+0300 (Israel Daylight Time)--> |
azure-monitor | Getting Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/getting-started.md | description: Guidance and recommendations for deploying Azure Monitor. Previously updated : 10/18/2021 Last updated : 05/31/2023 |
azure-monitor | Scom Assessment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/scom-assessment.md | description: You can use the System Center Operations Manager Health Check solut Previously updated : 06/25/2018 Last updated : 05/31/2023 |
azure-monitor | Basic Logs Configure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-configure.md | Configure a table for Basic logs if: | Container Apps | [ContainerAppConsoleLogs](/azure/azure-monitor/reference/tables/containerappconsoleLogs) | | Container Insights | [ContainerLogV2](/azure/azure-monitor/reference/tables/containerlogv2) | | Container Apps Environments | [AppEnvSpringAppConsoleLogs](/azure/azure-monitor/reference/tables/AppEnvSpringAppConsoleLogs) |- | Communication Services | [ACSCallAutomationIncomingOperations](/azure/azure-monitor/reference/tables/ACSCallAutomationIncomingOperations)<br>[ACSCallAutomationMediaSummary](/azure/azure-monitor/reference/tables/ACSCallAutomationMediaSummary)<br>[ACSCallRecordingSummary](/azure/azure-monitor/reference/tables/ACSCallRecordingSummary)<br>[ACSRoomsIncomingOperations](/azure/azure-monitor/reference/tables/acsroomsincomingoperations) | + | Communication Services | [ACSCallAutomationIncomingOperations](/azure/azure-monitor/reference/tables/ACSCallAutomationIncomingOperations)<br>[ACSCallAutomationMediaSummary](/azure/azure-monitor/reference/tables/ACSCallAutomationMediaSummary)<br>[ACSCallRecordingIncomingOperations](/azure/azure-monitor/reference/tables/ACSCallRecordingIncomingOperations)<br>[ACSCallRecordingSummary](/azure/azure-monitor/reference/tables/ACSCallRecordingSummary)<br>[ACSRoomsIncomingOperations](/azure/azure-monitor/reference/tables/acsroomsincomingoperations) | | Confidential Ledgers | [CCFApplicationLogs](/azure/azure-monitor/reference/tables/CCFApplicationLogs) | | Custom tables | All custom tables created with or migrated to the [data collection rule (DCR)-based logs ingestion API.](logs-ingestion-api-overview.md) | | Data Manager for Energy | [OEPDataplaneLogs](/azure/azure-monitor/reference/tables/OEPDataplaneLogs) | | Dedicated SQL Pool | [SynapseSqlPoolSqlRequests](/azure/azure-monitor/reference/tables/synapsesqlpoolsqlrequests)<br>[SynapseSqlPoolRequestSteps](/azure/azure-monitor/reference/tables/synapsesqlpoolrequeststeps)<br>[SynapseSqlPoolExecRequests](/azure/azure-monitor/reference/tables/synapsesqlpoolexecrequests)<br>[SynapseSqlPoolDmsWorkers](/azure/azure-monitor/reference/tables/synapsesqlpooldmsworkers)<br>[SynapseSqlPoolWaits](/azure/azure-monitor/reference/tables/synapsesqlpoolwaits) | | Dev Center | [DevCenterDiagnosticLogs](/azure/azure-monitor/reference/tables/DevCenterDiagnosticLogs) |+ | Data Transfer | [DataTransferOperations](/azure/azure-monitor/reference/tables/DataTransferOperations) | | Firewalls | [AZFWFlowTrace](/azure/azure-monitor/reference/tables/AZFWFlowTrace) | | Health Data | [AHDSMedTechDiagnosticLogs](/azure/azure-monitor/reference/tables/AHDSMedTechDiagnosticLogs) | | Kubernetes services | [AKSAudit](/azure/azure-monitor/reference/tables/AKSAudit)<br>[AKSAuditAdmin](/azure/azure-monitor/reference/tables/AKSAuditAdmin)<br>[AKSControlPlane](/azure/azure-monitor/reference/tables/AKSControlPlane) | |
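To make the configuration step concrete, switching one of the supported tables to the Basic plan is a single CLI call. A sketch with hypothetical resource names:

```azurecli-interactive
# Hypothetical names; moves the table to the Basic plan (use --plan Analytics to revert).
az monitor log-analytics workspace table update \
  --resource-group "my-resource-group" \
  --workspace-name "my-workspace" \
  --name ContainerLogV2 \
  --plan Basic
```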
azure-monitor | Private Link Configure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/private-link-configure.md | In this section, we review the step-by-step process of setting up a private link 1. Go to **Create a resource** in the Azure portal and search for **Azure Monitor Private Link Scope**. -  + :::image type="content" source="./media/private-link-security/ampls-find-1c.png" lightbox="./media/private-link-security/ampls-find-1c.png" alt-text="Screenshot showing finding Azure Monitor Private Link Scope."::: 1. Select **Create**. 1. Select a subscription and resource group. In this section, we review the step-by-step process of setting up a private link ### Connect Azure Monitor resources -Connect Azure Monitor resources like Log Analytics workspaces, Application Insights components, and [data collection endpoints](../essentials/data-collection-endpoint-overview.md) to your AMPLS. +Connect Azure Monitor resources like Log Analytics workspaces, Application Insights components, and [data collection endpoints](../essentials/data-collection-endpoint-overview.md) to your Azure Monitor Private Link Scope (AMPLS). 1. In your AMPLS, select **Azure Monitor Resources** in the menu on the left. Select **Add**. 1. Add the workspace or component. Selecting **Add** opens a dialog where you can select Azure Monitor resources. You can browse through your subscriptions and resource groups. You can also enter their names to filter down to them. Select the workspace or component and select **Apply** to add them to your scope. Now that you have resources connected to your AMPLS, create a private endpoint t 1. In your scope resource, select **Private Endpoint connections** from the resource menu on the left. Select **Private Endpoint** to start the endpoint creation process. You can also approve connections that were started in the Private Link Center here by selecting them and selecting **Approve**. - :::image type="content" source="./media/private-link-security/ampls-select-private-endpoint-connect-3.png" alt-text="Screenshot that shows Private Endpoint connections." lightbox="./media/private-link-security/ampls-select-private-endpoint-connect-3.png"::: + :::image type="content" source="./media/private-link-security/ampls-select-private-endpoint-connect-3.png" lightbox="./media/private-link-security/ampls-select-private-endpoint-connect-3.png" alt-text="Screenshot that shows Private Endpoint connections."::: -1. Select the subscription, resource group, name of the endpoint, and the region it should live in. The region must be the same region as the virtual network to which you connect it. +1. On the **Basics** tab, select the **Subscription** and **Resource group**. +1. Enter the **Name** of the endpoint and the **Network Interface Name**. +1. Select the **Region** the private endpoint should live in. The region must be the same region as the virtual network to which you connect it. 1. Select **Next: Resource**. + :::image type="content" source="./media/private-link-security/create-private-endpoint-basics.png" alt-text="A screenshot showing the create private endpoint basics tab." 
lightbox="./media/private-link-security/create-private-endpoint-basics.png"::: - :::image type="content" source="./media/private-link-security/ampls-select-private-endpoint-create-4.png" alt-text="Screenshot that shows the Create a private endpoint page in the Azure portal with the Resource tab selected." lightbox="./media/private-link-security/ampls-select-private-endpoint-create-4.png"::: +1. On the **Resource** tab,select the *Subscription* that contains your Azure Monitor Private Link Scope resource. +1. For **Resource type**, select *Microsoft.insights/privateLinkScopes*. +1. From the **Resource** dropdown, select the Private Link Scope you created earlier. -1. On the **Virtual Network** tab: - 1. Select the virtual network and subnet that you want to connect to your Azure Monitor resources. - 1. For **Network policy for private endpoints**, select **edit** if you want to apply network security groups or Route tables to the subnet that contains the private endpoint. In **Edit subnet network policy**, select the checkboxes next to **Network security groups** and **Route tables**. Select **Save**. - - For more information, see [Manage network policies for private endpoints](../../private-link/disable-private-endpoint-network-policy.md). +1. Select **Next: Virtual Network**. ++ :::image type="content" source="./media/private-link-security/create-private-endpoint-resource.png" alt-text="Screenshot that shows the Create a private endpoint page in the Azure portal with the Resource tab selected." lightbox="./media/private-link-security/create-private-endpoint-resource.png"::: ++1. On the **Virtual Network** tab, select the **Virtual network** and **Subnet** that you want to connect to your Azure Monitor resources. +1. For **Network policy for private endpoints**, select **edit** if you want to apply network security groups or Route tables to the subnet that contains the private endpoint. - 1. For **Private IP configuration**, by default, **Dynamically allocate IP address** is selected. If you want to assign a static IP address, select **Statically allocate IP address**. Then enter a name and private IP. - 1. Optionally, you can select or create an **Application security group**. You can use application security groups to group virtual machines and define network security policies based on those groups. - 1. Select **Next: DNS**. + In **Edit subnet network policy**, select the checkboxes next to **Network security groups** and **Route tables**, and select **Save**. For more information, see [Manage network policies for private endpoints](../../private-link/disable-private-endpoint-network-policy.md). ++1. For **Private IP configuration**, by default, **Dynamically allocate IP address** is selected. If you want to assign a static IP address, select **Statically allocate IP address**. Then enter a name and private IP. + Optionally, you can select or create an **Application security group**. You can use application security groups to group virtual machines and define network security policies based on those groups. +1. Select **Next: DNS**. - :::image type="content" source="./media/private-link-security/ampls-select-private-endpoint-create-5.png" alt-text="Screenshot that shows the Create a private endpoint page in the Azure portal with the Virtual Network tab selected." 
lightbox="./media/private-link-security/ampls-select-private-endpoint-create-5.png"::: + :::image type="content" source="./media/private-link-security/create-private-endpoint-virtual-network.png" alt-text="Screenshot that shows the Create a private endpoint page in the Azure portal with the Virtual Network tab selected." lightbox="./media/private-link-security/create-private-endpoint-virtual-network.png"::: ++1. On the **DNS** tab, select **Yes** for **Integrate with private DNS zone**, and let it automatically create a new private DNS zone. The actual DNS zones might be different from what's shown in the following screenshot. -1. On the **DNS** tab: - 1. Select **Yes** for **Integrate with private DNS zone**, and let it automatically create a new private DNS zone. The actual DNS zones might be different from what's shown in the following screenshot. + > [!NOTE] + > If you select **No** and prefer to manage DNS records manually, first finish setting up your private link. Include this private endpoint and the AMPLS configuration. Then, configure your DNS according to the instructions in [Azure private endpoint DNS configuration](../../private-link/private-endpoint-dns.md). Make sure not to create empty records as preparation for your private link setup. The DNS records you create can override existing settings and affect your connectivity with Azure Monitor. - > [!NOTE] - > If you select **No** and prefer to manage DNS records manually, first finish setting up your private link. Include this private endpoint and the AMPLS configuration. Then, configure your DNS according to the instructions in [Azure private endpoint DNS configuration](../../private-link/private-endpoint-dns.md). Make sure not to create empty records as preparation for your private link setup. The DNS records you create can override existing settings and affect your connectivity with Azure Monitor. - 1. Select **Review + create**. +1. Select **Next: Tags**, then select **Review + create**. - :::image type="content" source="./media/private-link-security/ampls-select-private-endpoint-create-6.png" alt-text="Screenshot that shows the Create a private endpoint page in the Azure portal with the DNS tab selected." lightbox="./media/private-link-security/ampls-select-private-endpoint-create-6.png"::: + :::image type="content" source="./media/private-link-security/create-private-endpoint-dns.png" alt-text="Screenshot that shows the Create a private endpoint page in the Azure portal with the DNS tab selected." lightbox="./media/private-link-security/create-private-endpoint-dns.png"::: -1. On the **Review + create** tab: - 1. Let validation pass. - 1. Select **Create**. +1. On the **Review + create** , once the validation passes select **Create**. You've now created a new private endpoint that's connected to this AMPLS. |
azure-monitor | Service Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/service-limits.md | |
azure-netapp-files | Azacsnap Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-troubleshoot.md | This article describes how to troubleshoot issues when using the Azure Applicati You might encounter several common issues when running AzAcSnap commands. Follow the instructions to troubleshoot the issues. If you still have issues, open a Service Request for Microsoft Support from the Azure portal and assign the request to the SAP HANA Large Instance queue. +## AzAcSnap command won't run ++In some cases, AzAcSnap won't start due to the user's environment. ++### Failed to create CoreCLR ++AzAcSnap is written in .NET, and the CoreCLR is an execution engine for .NET apps, performing functions such as IL byte code loading, compilation to machine code, and garbage collection. In this case, there's an environmental problem blocking the CoreCLR engine from starting. ++A common cause is limited permissions or environmental setup for the AzAcSnap operating system user, usually 'azacsnap'. ++The error `Failed to create CoreCLR, HRESULT: 0x80004005` can be caused by a lack of write access for the azacsnap user to the system's `TMPDIR`. ++> [!NOTE] +> All command lines starting with `#` are commands run as `root`; all command lines starting with `>` are run as the `azacsnap` user. ++Check the `/tmp` ownership and permissions (note in this example only the `root` user can read and write to `/tmp`): ++```bash +# ls -ld /tmp +drwx------ 9 root root 8192 Mar 31 10:50 /tmp +``` ++A typical `/tmp` has the following permissions, which would allow the azacsnap user to run the azacsnap command: +```bash +# ls -ld /tmp +drwxrwxrwt 9 root root 8192 Mar 31 10:51 /tmp +``` ++If it's not possible to change the `/tmp` directory permissions, then create a user-specific `TMPDIR`. + +Make a `TMPDIR` for the `azacsnap` user: ++```bash +> mkdir /home/azacsnap/_tmp +> export TMPDIR=/home/azacsnap/_tmp +> azacsnap -c about +``` ++```output + + + WKO0XXXXXXXXXXXNW + Wk,.,oxxxxxxxxxxx0W + 0;.'.;dxxxxxxxxxxxKW + Xl'''.'cdxxxxxxxxxdkX + Wx,''''.,lxxxxdxdddddON + 0:''''''.;oxdddddddddxKW + Xl''''''''':dddddddddddkX + Wx,''''''''':ddddddddddddON + O:''''''''',xKxddddddoddod0W + Xl''''''''''oNW0dooooooooooxX + Wx,,,,,,'','c0WWNkoooooooooookN + WO:',,,,,,,,;cxxxxooooooooooooo0W + Xl,,,,,,,;;;;;;;;;;:llooooooooldX + Nx,,,,,,,,,,:c;;;;;;;;coooollllllkN + WO:,,,,,,,,,;kXkl:;;;;,;lolllllllloOW + Xl,,,,,,,,,,dN WNOl:;;;;:lllllllllldK + 0c,;;;;,,,;lK NOo:;;:clllllllllo0W + WK000000000N NK000KKKKKKKKKKXW + + + Azure Application Consistent Snapshot Tool + AzAcSnap 7a (Build: 1AA8343) +``` ++> [!IMPORTANT] +> Changing the user's `TMPDIR` would need to be made permanent by changing the user's profile (e.g. `$HOME/.bashrc` or `$HOME/.bash_profile`), as shown in the sketch after this entry. There would also be a need to clean up the `TMPDIR` on system reboot; this is typically automatic for `/tmp`. + ## Check log files, result files, and syslog Some of the best sources of information for investigating AzAcSnap issues are the log files, result files, and the system log. Make sure the installer added the location of these files to the AzAcSnap user's This command output shows that the connection key hasn't been set up correctly with the `hdbuserstore Set` command. 
- ```bash - hdbsql -n 172.18.18.50 -i 00 -U AZACSNAP "select version from sys.m_database" - ``` +```bash +hdbsql -n 172.18.18.50 -i 00 -U AZACSNAP "select version from sys.m_database" +``` - ```output - * -10104: Invalid value for KEY (AZACSNAP) - ``` +```output +* -10104: Invalid value for KEY (AZACSNAP) +``` For more information on setup of the `hdbuserstore`, see [Get started with AzAcSnap](azacsnap-get-started.md). In the preceding example, adding the `DATABASE BACKUP ADMIN` privilege to the SY - [Tips and tricks for using AzAcSnap](azacsnap-tips.md) - [AzAcSnap command reference](azacsnap-cmd-ref-configure.md)++ |
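Following the important note above about making the `TMPDIR` change permanent, a minimal sketch for the `azacsnap` user's profile, assuming the `_tmp` directory created earlier:

```bash
# Run as the azacsnap user: persist the private TMPDIR across logins and
# recreate it after a reboot clears it.
cat >> "$HOME/.bash_profile" <<'EOF'
export TMPDIR=$HOME/_tmp
mkdir -p "$TMPDIR"
EOF
```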
azure-netapp-files | Azure Netapp Files Create Volumes Smb | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-create-volumes-smb.md | Before creating an SMB volume, you need to create an Active Directory connection > [!IMPORTANT] > The SMB Continuous Availability feature is currently in preview. You need to submit a waitlist request for accessing the feature through the **[Azure NetApp Files SMB Continuous Availability Shares Public Preview waitlist submission page](https://aka.ms/anfsmbcasharespreviewsignup)**. Wait for an official confirmation email from the Azure NetApp Files team before using the Continuous Availability feature. - > - You should enable Continuous Availability only for Citrix App Layering, SQL Server, and [FSLogix user profile containers](../virtual-desktop/create-fslogix-profile-container.md). Using SMB Continuous Availability shares for workloads other than Citrix App Layering, SQL Server, and FSLogix user profile containers is *not* supported. This feature is currently supported on Windows SQL Server. Linux SQL Server is not currently supported. If you are using a non-administrator (domain) account to install SQL Server, ensure that the account has the required security privilege assigned. If the domain account does not have the required security privilege (`SeSecurityPrivilege`), and the privilege cannot be set at the domain level, you can grant the privilege to the account by using the **Security privilege users** field of Active Directory connections. See [Create an Active Directory connection](create-active-directory-connections.md#create-an-active-directory-connection). + + >[!IMPORTANT] + >You should enable Continuous Availability for Citrix App Layering, SQL Server, and [FSLogix user profile containers](../virtual-desktop/create-fslogix-profile-container.md). Using SMB Continuous Availability shares for workloads other than Citrix App Layering, SQL Server, and FSLogix user profile containers is *not* supported. This feature is currently supported on Windows SQL Server. Linux SQL Server is not currently supported. If you are using a non-administrator (domain) account to install SQL Server, ensure that the account has the required security privilege assigned. If the domain account does not have the required security privilege (`SeSecurityPrivilege`), and the privilege cannot be set at the domain level, you can grant the privilege to the account by using the **Security privilege users** field of Active Directory connections. See [Create an Active Directory connection](create-active-directory-connections.md#create-an-active-directory-connection). **Custom applications are not supported with SMB Continuous Availability.** |
azure-netapp-files | Azure Netapp Files Solution Architectures | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md | This section provides references for solutions for Linux OSS applications and da ### Oracle * [Oracle Database with Azure NetApp Files - Azure Example Scenarios](/azure/architecture/example-scenario/file-storage/oracle-azure-netapp-files)-* [Oracle Databases on Microsoft Azure Using Azure NetApp Files](https://www.netapp.com/media/17105-tr4780.pdf) * [Oracle VM images and their deployment on Microsoft Azure: Shared storage configuration options](../virtual-machines/workloads/oracle/oracle-vm-solutions.md#shared-storage-configuration-options)+* [Oracle On Azure IaaS Recommended Practices For Success](https://github.com/Azure/Oracle-Workloads-for-Azure/blob/main/Oracle%20on%20Azure%20IaaS%20Recommended%20Practices%20for%20Success.pdf) * [Run Your Most Demanding Oracle Workloads in Azure without Sacrificing Performance or Scalability](https://techcommunity.microsoft.com/t5/azure-architecture-blog/run-your-most-demanding-oracle-workloads-in-azure-without/ba-p/3264545) * [Oracle database performance on Azure NetApp Files single volumes](performance-oracle-single-volumes.md) * [Benefits of using Azure NetApp Files with Oracle Database](solutions-benefits-azure-netapp-files-oracle-database.md)+* [Oracle Databases on Microsoft Azure Using Azure NetApp Files](https://www.netapp.com/media/17105-tr4780.pdf) ### Financial analytics and trading * [Host a Murex MX.3 workload on Azure](/azure/architecture/example-scenario/finance/murex-mx3-azure) |
azure-netapp-files | Enable Continuous Availability Existing SMB | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/enable-continuous-availability-existing-SMB.md | You can enable the SMB Continuous Availability (CA) feature when you [create a n > See the [**Enable Continuous Availability**](azure-netapp-files-create-volumes-smb.md#continuous-availability) option for additional details and considerations. >[!IMPORTANT]-> You should enable Continuous Availability only for [Citrix App Layering](https://docs.citrix.com/en-us/citrix-app-layering/4.html), SQL Server, and [FSLogix user profile containers](../virtual-desktop/create-fslogix-profile-container.md). Using SMB Continuous Availability shares for any other workload is not supported. This feature is currently supported on Windows SQL Server. Linux SQL Server is not currently supported. +> You should enable Continuous Availability for [Citrix App Layering](https://docs.citrix.com/en-us/citrix-app-layering/4.html), SQL Server, and [FSLogix user profile containers](../virtual-desktop/create-fslogix-profile-container.md). Using SMB Continuous Availability shares for any other workload is not supported. This feature is currently supported on Windows SQL Server. Linux SQL Server is not currently supported. > If you are using a non-administrator (domain) account to install SQL Server, ensure that the account has the required security privilege assigned. If the domain account does not have the required security privilege (`SeSecurityPrivilege`), and the privilege cannot be set at the domain level, you can grant the privilege to the account by using the **Security privilege users** field of Active Directory connections. See [Create an Active Directory connection](create-active-directory-connections.md#create-an-active-directory-connection). ## Steps 1. Make sure that you have [registered the SMB Continuous Availability Shares](https://aka.ms/anfsmbcasharespreviewsignup) feature. -2. Click the SMB volume that you want to have SMB CA enabled. Then click **Edit**. +2. Select the SMB volume that you want to have SMB CA enabled. Then select **Edit**. 3. On the Edit window that appears, select the **Enable Continuous Availability** checkbox.  |
azure-web-pubsub | Howto Integrate App Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-integrate-app-service.md | + + Title: Integrate - Build a real-time collaborative whiteboard using Azure Web PubSub and deploy it to Azure App Service +description: A how-to guide about how to use Azure Web PubSub to enable real-time collaboration on a digital whiteboard and deploy as a Web App using Azure App Service +++++ Last updated : 05/17/2023++# How-to: build a real-time collaborative whiteboard using Azure Web PubSub and deploy it to Azure App Service ++A new class of applications is reimagining what modern work could be. While [Microsoft Word](https://www.microsoft.com/microsoft-365/word) brings editors together, [Figma](https://www.figma.com) gathers up designers on the same creative endeavor. This class of applications builds on a user experience that makes us feel connected with our remote collaborators. From a technical point of view, users' activities need to be synchronized across screens at low latency. ++## Overview +In this how-to guide, we take a cloud-native approach and use Azure services to build a real-time collaborative whiteboard, and we deploy the project as a Web App to Azure App Service. The whiteboard app is accessible in the browser and allows anyone to draw on the same canvas. +++> [!div class="nextstepaction"] +> [Check out the live whiteboard demo](https://azure.github.io/azure-webpubsub/demos/whiteboard) ++### Architecture ++|Azure service name | Purpose | Benefits | +|-|-|-| +|[Azure App Service](https://learn.microsoft.com/azure/app-service/) | Provides the hosting environment for the backend application, which is built with [Express](https://expressjs.com/) | Fully managed environment for application backends, with no need to worry about infrastructure where the code runs +|[Azure Web PubSub](https://learn.microsoft.com/azure/azure-web-pubsub/overview) | Provides a low-latency, bi-directional data exchange channel between the backend application and clients | Drastically reduces server load by freeing the server from managing persistent WebSocket connections and scales to 100K concurrent client connections with just one resource +++## Prerequisites +You can find a detailed explanation of the [data flow](#data-flow) at the end of this how-to guide, as we're going to focus on building and deploying the whiteboard app first. + +In order to follow the step-by-step guide, you need +> [!div class="checklist"] +> * An [Azure](https://portal.azure.com/) account. If you don't have an Azure subscription, create an [Azure free account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) before you begin. +> * [Azure CLI](/cli/azure/install-azure-cli) (version 2.29.0 or higher) or [Azure Cloud Shell](../cloud-shell/quickstart.md) to manage Azure resources. ++## Create Azure resources using Azure CLI +### 1. Sign in +1. Sign in to Azure CLI by running the following command. + ```azurecli-interactive + az login + ``` ++1. Create a resource group on Azure. + ```azurecli-interactive + az group create \ + --location "westus" \ + --name "whiteboard-group" + ``` ++### 2. Create a Web App resource +1. Create a free App Service plan. + ```azurecli-interactive + az appservice plan create \ + --resource-group "whiteboard-group" \ + --name "demo" \ + --sku FREE \ + --is-linux + ``` ++1. 
Create a Web App resource + ```azurecli-interactive + az webapp create \ + --resource-group "whiteboard-group" \ + --name "whiteboard-app" \ + --plan "demo" \ + --runtime "NODE:18-lts" + ``` ++### 3. Create a Web PubSub resource +1. Create a Web PubSub resource. + ```azurecli-interactive + az webpubsub create \ + --name "whiteboard-app" \ + --resource-group "whiteboard-group" \ + --location "westus" \ + --sku Free_F1 + ``` ++1. Show and store the value of `primaryConnectionString` somewhere for later use. + ```azurecli-interactive + az webpubsub key show \ + --name "whiteboard-app" \ + --resource-group "whiteboard-group" + ``` ++## Get the application code +Run the following command to get a copy of the application code. You can find a detailed explanation of the [data flow](#data-flow) at the end of this how-to guide. +```bash +git clone https://github.com/Azure/awps-webapp-sample.git +``` ++## Deploy the application to App Service +1. App Service supports many deployment workflows. For this guide, we're going to deploy a ZIP package. Run the following commands to prepare the ZIP. + ```bash + npm install + npm run build + zip -r app.zip * + ``` ++2. Use the following command to deploy it to Azure App Service. + ```azurecli-interactive + az webapp deployment source config-zip \ + --resource-group "whiteboard-group" \ + --name "whiteboard-app" \ + --src app.zip + ``` ++3. Set the Azure Web PubSub connection string in the application settings. Use the value of `primaryConnectionString` you stored from an earlier step. + ```azurecli-interactive + az webapp config appsettings set \ + --resource-group "whiteboard-group" \ + --name "whiteboard-app" \ + --settings Web_PubSub_ConnectionString="<primaryConnectionString>" + ``` ++## Configure an upstream server to handle events coming from Web PubSub +Whenever a client sends a message to Web PubSub service, the service sends an HTTP request to an endpoint you specify. This mechanism is what your backend server uses to further process messages, for example, to persist messages in a database of your choice. ++As with HTTP requests, Web PubSub service needs to know where to locate your application server. Since the backend application is now deployed to App Service, we get a publicly accessible domain name for it. +1. Show and store the value of `name` somewhere. + ```azurecli-interactive + az webapp config hostname list \ + --resource-group "whiteboard-group" \ + --webapp-name "whiteboard-app" + ``` ++1. The endpoint we decided to expose on the backend server is [`/eventhandler`](https://github.com/Azure/awps-webapp-sample/blob/main/whiteboard/server.js#L17), and the `hub` name for the whiteboard app is [`"sample_draw"`](https://github.com/Azure/awps-webapp-sample/blob/main/whiteboard/server.js#l14). + ```azurecli-interactive + az webpubsub hub create \ + --resource-group "whiteboard-group" \ + --name "whiteboard-app" \ + --hub-name "sample_draw" \ + --event-handler url-template="https://<Replace with the hostname of your Web App resource>/eventhandler" user-event-pattern="*" system-event="connected" system-event="disconnected" + ``` +> [!IMPORTANT] +> `url-template` has three parts: protocol + hostname + path, which in our case is `https://<The hostname of your Web App resource>/eventhandler`. ++## View the whiteboard app in a browser +Now head over to your browser and visit your deployed Web App. It's recommended to have multiple browser tabs open so that you can experience the real-time collaborative aspect of the app. 
Or better, share the link with a colleague or friend. ++## Data flow +### Overview +The data flow section dives deeper into how the whiteboard app is built. The whiteboard app has two transport methods. +- An HTTP service written as an Express app and hosted on App Service. +- WebSocket connections managed by Azure Web PubSub. ++By using Azure Web PubSub to manage WebSocket connections, the load on the Web App is reduced. Apart from authenticating the client and serving images, the Web App isn't involved in synchronizing drawing activities. A client's drawing activities are sent directly to Web PubSub and broadcast to all clients in a group. ++At any point in time, there may be more than one client drawing. If the Web App were to manage WebSocket connections on its own, it would need to broadcast every drawing activity to all other clients. The resulting traffic and processing would place a large burden on the server. ++ :::column::: + The client, built with [Vue](https://vuejs.org/), makes an HTTP request for a Client Access Token to an endpoint `/negotiate`. The backend application is an [Express app](https://expressjs.com/) and hosted as a Web App using Azure App Service. + :::column-end::: + :::column::: + :::image type="content" source="./media/howto-integrate-app-service/dataflow-1.jpg" alt-text="Screenshot of step one of app data flow." lightbox="./media/howto-integrate-app-service/dataflow-1.jpg"::: + :::column-end::: ++ :::column::: + When the backend application successfully [returns the Client Access Token](https://github.com/Azure/awps-webapp-sample/blob/main/whiteboard/server.js#L62) to the connecting client, the client uses it to establish a WebSocket connection with Azure Web PubSub. + :::column-end::: + :::column::: + :::image type="content" source="./media/howto-integrate-app-service/dataflow-2.jpg" alt-text="Screenshot of step two of app data flow." lightbox="./media/howto-integrate-app-service/dataflow-2.jpg"::: + :::column-end::: ++ :::column::: + If the handshake with Azure Web PubSub is successful, the client is added to a group named `draw`, effectively subscribing to messages published to this group. Also, the client is given the permission to send messages to the [`draw` group](https://github.com/Azure/awps-webapp-sample/blob/main/whiteboard/server.js#L64). + :::column-end::: + :::column::: + :::image type="content" source="./media/howto-integrate-app-service/dataflow-3.jpg" alt-text="Screenshot of step three of app data flow." lightbox="./media/howto-integrate-app-service/dataflow-3.jpg"::: + :::column-end::: +> [!NOTE] +> To keep this how-to guide focused, all connecting clients are added to the same group named `draw` and are given permission to send messages to this group. To manage client connections at a granular level, see the full references of the APIs provided by Azure Web PubSub. ++ :::column::: + Azure Web PubSub notifies the backend application that a client has connected. The backend application handles the `onConnected` event by calling `sendToAll()`, with a payload of the latest number of connected clients. + :::column-end::: + :::column::: + :::image type="content" source="./media/howto-integrate-app-service/dataflow-4.jpg" alt-text="Screenshot of step four of app data flow." 
lightbox="./media/howto-integrate-app-service/dataflow-4.jpg"::: + :::column-end::: +> [!NOTE] +> It is important to note that if there are a large number of online users in the `draw` group, with **a single** network call from the backend application, all the online users will be notified that a new user has just joined. This drastically reduces the complexity and load of the backend application. ++ :::column::: + As soon as a client establishes a persistent connection with Web PubSub, it makes an HTTP request to the backend application to fetch the latest shape and background data at [`/diagram`](https://github.com/Azure/awps-webapp-sample/blob/main/whiteboard/server.js#L70). An HTTP service hosted on App Service can be combined with Web PubSub. App Service takes care serving HTTP endpoints, while Web PubSub takes care of managing WebSocket connections. + :::column-end::: + :::column::: + :::image type="content" source="./media/howto-integrate-app-service/dataflow-5.jpg" alt-text="Screenshot of step five of app data flow." lightbox="./media/howto-integrate-app-service/dataflow-5.jpg"::: + :::column-end::: ++ :::column::: + Now that the clients and backend application have two ways to exchange data. One is the conventional HTTP request-response cycle and the other is the persistent, bi-directional channel through Web PubSub. The drawing actions, which originate from one user and need to be broadcasted to all users as soon as it takes place, are delivered through Web PubSub. It doesn't require involvement of the backend application. + :::column-end::: + :::column::: + :::image type="content" source="./media/howto-integrate-app-service/dataflow-6.jpg" alt-text="Screenshot of step six of app data flow." lightbox="./media/howto-integrate-app-service/dataflow-6.jpg"::: + :::column-end::: ++## Clean up resources +Although the application uses only the free tiers of both services, it's best practice to delete resources if you no longer need them. You can delete the resource group along with the resources in it using following command, ++```azurecli-interactive +az group delete + --name "whiteboard-group" +``` ++## Next steps +> [!div class="nextstepaction"] +> [Check out more demos built with Web PubSub](https://azure.github.io/azure-webpubsub/demos/chat) + |
azure-web-pubsub | Samples App Scenarios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/samples-app-scenarios.md | + + Title: Azure Web PubSub samples - app scenarios ++description: A list of code samples showing how Web PubSub is used in a wide variety of web applications ++++ Last updated : 05/15/2023++zone_pivot_groups: azure-web-pubsub-samples-app-scenarios ++# Azure Web PubSub samples - app scenarios ++Bi-directional, low-latency, real-time data exchange between clients and server used to be a *nice-to-have* feature, but now end users expect this behavior **by default**. Azure Web PubSub is used in a wide range of industries, powering applications like +- dashboards for real-time monitoring in finance, retail, and manufacturing +- cross-platform chat rooms in health care and social networking +- competitive bidding in online auctions +- collaborative coauthoring in modern work applications +- and a lot more ++Here's a list of code samples written by the Azure Web PubSub team and the community. To have your project featured here, consider submitting a Pull Request. ++| App scenario | Industry | +| -- | -- | +| [Unity multiplayer gaming](https://github.com/Azure/azure-webpubsub/tree/main/samples/csharp/unity-multiplayer-sample) | Gaming | +| [Chat app with persistent storage](https://github.com/Azure/azure-webpubsub/tree/main/samples/csharp/chatapp-withstorage) | Gaming | ++| App scenario | Industry | +| -- | -- | +| [Cross-platform chat](https://github.com/Azure/azure-webpubsub/blob/main/samples/csharp/chatapp/Startup.cs#L29) | Social | +| [Collaborative code editor](https://github.com/Azure/azure-webpubsub/blob/main/samples/csharp/chatapp/Startup.cs#L29) | Modern work | ++| App scenario | Industry | +| -- | -- | +| [Chat app](https://github.com/Azure/azure-webpubsub/tree/main/samples/java/chatapp) | Social | ++| App scenario | Industry | +| -- | -- | +| [Chat app](https://github.com/Azure/azure-webpubsub/tree/main/samples/python/chatapp) | Social | |
azure-web-pubsub | Samples Authenticate And Connect | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/samples-authenticate-and-connect.md | + + Title: Azure Web PubSub samples - authenticate and connect ++description: A list of code samples showing how to authenticate and connect to Web PubSub resource(s) ++++ Last updated : 05/15/2023++zone_pivot_groups: azure-web-pubsub-samples-authenticate-and-connect ++# Azure Web PubSub samples - Authenticate and connect ++To make use of your Azure Web PubSub resource, you need to authenticate and connect to the service first. Azure Web PubSub service distinguishes between two roles, each with a different set of capabilities. + +## Client +The client can be a browser, a mobile app, an IoT device, or even an EV charging point, as long as it supports WebSocket. A client is limited to publishing and subscribing to messages. ++## Application server +While the client's role is often limited, the application server's role goes beyond simply receiving and publishing messages. Before a client tries to connect with your Web PubSub resource, it goes to the application server for a Client Access Token first. The token is used to establish a persistent WebSocket connection with your Web PubSub resource. ++| Use case | Description | +| -- | -- | +| [Using connection string](https://github.com/Azure/azure-webpubsub/blob/main/samples/csharp/chatapp/Startup.cs#L29) | Applies to application server only. +| [Using Client Access Token](https://github.com/Azure/azure-webpubsub/blob/main/samples/csharp/chatapp/wwwroot/index.html#L13) | Applies to client only. Client Access Token is generated on the application server. +| [Using Azure Active Directory](https://github.com/Azure/azure-webpubsub/blob/main/samples/csharp/chatapp-aad/Startup.cs#L26) | Using Azure AD for authorization offers improved security and ease of use compared to Access Key authorization. +| [Anonymous connection](https://github.com/Azure/azure-webpubsub/blob/main/samples/csharp/clientWithCert/client/Program.cs#L15) | Anonymous connection allows clients to connect with Azure Web PubSub directly without going to an application server for a Client Access Token first. This is useful for clients that have limited networking capabilities, like an EV charging point. ++| Use case | Description | +| -- | -- | +| [Using connection string](https://github.com/Azure/azure-webpubsub/blob/main/samples/javascript/chatapp/sdk/server.js#L9) | Applies to application server only. +| [Using Client Access Token](https://github.com/Azure/azure-webpubsub/blob/main/samples/javascript/chatapp/sdk/src/index.js#L5) | Applies to client only. Client Access Token is generated on the application server. +| [Using Azure Active Directory](https://github.com/Azure/azure-webpubsub/blob/main/samples/javascript/chatapp-aad/server.js#L24) | Using Azure AD for authorization offers improved security and ease of use compared to Access Key authorization. ++| Use case | Description | +| -- | -- | +| [Using connection string](https://github.com/Azure/azure-webpubsub/blob/eb60438ff9e0735d90a6e7e6370b9d38aa6bc730/samples/java/chatapp/src/main/java/com/webpubsub/tutorial/App.java#L21) | Applies to application server only. +| [Using Client Access Token](https://github.com/Azure/azure-webpubsub/blob/eb60438ff9e0735d90a6e7e6370b9d38aa6bc730/samples/java/chatapp/src/main/resources/public/index.html#L12) | Applies to client only. 
Client Access Token is generated on the application server. +| [Using Azure Active Directory](https://github.com/Azure/azure-webpubsub/blob/eb60438ff9e0735d90a6e7e6370b9d38aa6bc730/samples/java/chatapp-aad/src/main/java/com/webpubsub/tutorial/App.java#L22) | Using Azure AD for authorization offers improved security and ease of use compared to Access Key authorization. ++| Use case | Description | +| | -- | +| [Using connection string](https://github.com/Azure/azure-webpubsub/blob/eb60438ff9e0735d90a6e7e6370b9d38aa6bc730/samples/python/chatapp/server.py#L19) | Applies to application server only. +| [Using Client Access Token](https://github.com/Azure/azure-webpubsub/blob/eb60438ff9e0735d90a6e7e6370b9d38aa6bc730/samples/python/chatapp/public/index.html#L13) | Applies to client only. Client Access Token is generated on the application server. +| [Using Azure Active Directory](https://github.com/Azure/azure-webpubsub/blob/eb60438ff9e0735d90a6e7e6370b9d38aa6bc730/samples/python/chatapp-aad/server.py#L21) | Using Azure AD for authorization offers improved security and ease of use compared to Access Key authorization. |
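As a hedged illustration of the server-side flow described above, here's a minimal Python sketch (using the `azure-messaging-webpubsubservice` package from the Python samples; names are placeholders): the application server holds the connection string and mints a Client Access Token URL, which the client then uses to open its WebSocket connection.

```python
from azure.messaging.webpubsubservice import WebPubSubServiceClient

# Only the application server holds the connection string, never the client.
service = WebPubSubServiceClient.from_connection_string(
    "<your-connection-string>", hub="chat"
)

# Mint a short-lived Client Access Token for one user. The returned "url"
# is the wss:// endpoint the client connects to directly.
token = service.get_client_access_token(user_id="balas")
print(token["url"])
```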
azure-web-pubsub | Samples Platforms And Frameworks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/samples-platforms-and-frameworks.md | + + Title: Azure Web PubSub samples - platforms and frameworks ++description: A list of code samples showing how Web PubSub is used in a wide variety of platforms and frameworks ++++ Last updated : 05/15/2023+++# Azure Web PubSub samples - Platforms and frameworks ++Because Azure Web PubSub is built on top of WebSocket, the only requirement is WebSocket support, which is available in all modern browsers and major app development platforms. ++## Web front-end +- [Blazor WebAssembly](https://github.com/Azure/azure-webpubsub/tree/main/samples/csharp/blazor-webassembly) +- [React](https://github.com/Azure/azure-webpubsub/tree/main/samples/javascript/chatapp/react) +- [Vue](https://github.com/Azure/azure-webpubsub/tree/main/samples/javascript/scoreboard) ++## Gaming +- [Unity](https://github.com/Azure/azure-webpubsub/tree/main/samples/csharp/unity-multiplayer-sample) |
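Because the only requirement is WebSocket support, a client doesn't even need an Azure SDK. Here's a minimal sketch using Python's third-party `websockets` package; the URL is a placeholder for a Client Access Token URL produced by your application server.

```python
import asyncio
import websockets  # third-party package: pip install websockets

async def main():
    # Placeholder: a Client Access Token URL obtained from your application server.
    url = "wss://<your-resource>.webpubsub.azure.com/client/hubs/chat?access_token=<token>"
    async with websockets.connect(url) as ws:
        print(await ws.recv())  # waits until the service pushes a message

asyncio.run(main())
```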
azure-web-pubsub | Tutorial Serverless Static Web App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/tutorial-serverless-static-web-app.md | Title: Tutorial - Create a serverless chat app with Azure Web PubSub service and Azure Static Web Apps + Title: Integrate - Create a chat app using Azure Web PubSub and deploy to Azure Static Web Apps description: A tutorial about how to use Azure Web PubSub service and Azure Static Web Apps to build a serverless chat application. Previously updated : 06/03/2022 Last updated : 05/16/2022 # Tutorial: Create a serverless chat app with Azure Web PubSub service and Azure Static Web Apps -Azure Web PubSub service helps you build real-time messaging web applications using WebSockets. By using Azure Static Web Apps, you can automatically build and deploy full-stack web apps to Azure from a code repository. In this tutorial, you'll learn how to use Web PubSub service and Static Web Apps to build a serverless, real-time chat room messaging application. +Azure Web PubSub helps you build real-time messaging web applications using WebSocket. Azure Static Web Apps helps you build and deploy full-stack web apps automatically to Azure from a code repository. In this tutorial, you learn how to use Web PubSub and Static Web Apps together to build a real-time chat room application. -In this tutorial, you'll learn how to: +More specifically, you learn how to: > [!div class="checklist"] > * Build a serverless chat app GitHub or Azure Repos provide source control for Static Web Apps. Azure monitors The sample chat room application provided with this tutorial has the following workflow. -1. When a user signs in to the app, the Azure Functions `login` API will be triggered to generate a Web PubSub service client connection URL. +1. When a user signs in to the app, the Azure Functions `login` API is triggered to generate a Web PubSub service client connection URL. 1. When the client initializes the connection request to Web PubSub, the service sends a system `connect` event that triggers the Functions `connect` API to authenticate the user.-1. When a client sends a message to Azure Web PubSub service, the service will respond with a user `message` event and the Functions `message` API will be triggered to broadcast the message to all the connected clients. +1. When a client sends a message to Azure Web PubSub service, the service responds with a user `message` event and the Functions `message` API is triggered to broadcast the message to all the connected clients. 1. The Functions `validate` API is triggered periodically for [CloudEvents Abuse Protection](https://github.com/cloudevents/spec/blob/v1.0/http-webhook.md#4-abuse-protection) when the events in Azure Web PubSub are configured with predefined parameter `{event}`, that is, https://$STATIC_WEB_APP/api/{event}. > [!NOTE] The sample chat room application provided with this tutorial has the following w ## Create a repository -This article uses a GitHub template repository to make it easy for you to get started. The template features a starter app that you will deploy to Azure Static Web Apps. +This article uses a GitHub template repository to make it easy for you to get started. The template features a starter app that you deploy to Azure Static Web Apps. 1. Go to [https://github.com/Azure/awps-swa-sample/generate](https://github.com/login?return_to=/Azure/awps-swa-sample/generate) to create a new repo for this tutorial. 1. 
Select yourself as **Owner** and name your repository **my-awps-swa-app**. Before you can navigate to your new static site, the deployment build must first ## Configure the Web PubSub event handler -You're very close to complete. The last step is to configure Web PubSub transfer client requests to your function APIs. +You're almost done. The last step is to configure Web PubSub so that client requests are transferred to your function APIs. 1. Run the following command to configure Web PubSub service events. It maps functions under the `api` folder in your repo to the Web PubSub event handler. az group delete --name my-awps-swa-group ## Next steps -In this quickstart, you learned how to run a serverless chat application. Now, you could start to build your own application. +In this quickstart, you learned how to develop and deploy a serverless chat application. Now, you can start building your own application. > [!div class="nextstepaction"]-> [Tutorial: Client streaming using subprotocol](tutorial-subprotocol.md) +> [Have fun with playable demos](https://azure.github.io/azure-webpubsub/) > [!div class="nextstepaction"] > [Azure Web PubSub bindings for Azure Functions](reference-functions-bindings.md) -> [!div class="nextstepaction"] -> [Explore more Azure Web PubSub samples](https://github.com/Azure/azure-webpubsub/tree/main/samples) + |
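As a sketch of the abuse-protection step in the workflow above: per the linked CloudEvents spec, the `validate` handler only needs to echo the requesting origin back in a `WebHook-Allowed-Origin` header. The following Azure Functions (Python) snippet is an illustration of that handshake, not the sample's actual implementation.

```python
import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    # CloudEvents Abuse Protection: Web PubSub sends a preflight request with a
    # WebHook-Request-Origin header; echoing it back approves the endpoint.
    origin = req.headers.get("WebHook-Request-Origin", "*")
    return func.HttpResponse(status_code=200, headers={"WebHook-Allowed-Origin": origin})
```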
backup | Backup Mabs Whats New Mabs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-mabs-whats-new-mabs.md | The following table lists the included features in MABS V4: | Windows Server 2022 support | You can install MABS V4 on and protect Windows Server 2022. To use MABS V4 with *WS2022*, you can either upgrade your operating system (OS) to *WS2022* before installing/upgrading to MABS V4, or you can upgrade your OS after installing/upgrading V4 on *WS2019*. <br><br> MABS V4 is a full release, and can be installed directly on Windows Server 2022, Windows Server 2019, or can be upgraded from MABS V3. Learn more [about the installation prerequisites](backup-azure-microsoft-azure-backup.md#software-package) before you upgrade to or install Backup Server V4. | | SQL Server 2022 support | You can install MABS V4 with SQL 2022 as the MABS database. You can upgrade the SQL Server from SQL 2017 to SQL 2022, or install it fresh. You can also back up SQL 2022 workload with MABS V4. | | Private Endpoint Support | With MABS V4, you can use private endpoints to send your online backups to Azure Backup Recovery Services vault. [Learn more](backup-azure-private-endpoints-concept.md). |-| Azure Stack HCI 22H2 support | MABS V4 now supports protection of workloads running in Azure Stack HCI V1 till 22H2. [Learn more](back-up-azure-stack-hyperconverged-infrastructure-virtual-machines.md). | +| Azure Stack HCI 22H2 support | MABS V4 now supports protection of workloads running in Azure Stack HCI from V1 to 22H2. [Learn more](back-up-azure-stack-hyperconverged-infrastructure-virtual-machines.md). | | VMware 8.0 support | MABS V4 can now back up VMware VMs running on VMware 8.0. MABS V4 supports VMware versions 6.5 to 8.0. [Learn more](backup-azure-backup-server-vmware.md). <br><br> Note that MABS V4 doesn't support the DataSets feature added in vSphere 8.0. | | Item-level recovery from online recovery points for Hyper-V and Stack HCI VMs running Windows Server | With MABS V4, you can perform item-level recovery of files and folders from your online recovery point for VMs running Windows Server on Hyper-V or Stack HCI without downloading the entire recovery point. <br><br> Go to the *Recovery* pane, select a *VM online recovery point* and double-click the *recoverable item* to browse and recover its contents at a file/folder level. <br><br> [Learn more](back-up-hyper-v-virtual-machines-mabs.md). | | Parallel Restore of VMware and Hyper-V VMs | MABS V4 supports parallel restore of [VMware](restore-azure-backup-server-vmware.md) and [Hyper-V](back-up-hyper-v-virtual-machines-mabs.md) virtual machines. With earlier versions of MABS, restore of VMware and Hyper-V virtual machines was restricted to only one restore job at a time. With MABS V4, by default you can restore *eight* VMs in parallel, and this number can be increased by using a registry key. | |
baremetal-infrastructure | Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/requirements.md | The following sections identify the requirements to use Nutanix Clusters on Azur ## My Nutanix account requirements -For more information, see "NC2 on Azure Subscription and Billing" in [Nutanix Cloud Clusters on Azure Deployment and User Guide](https://download.nutanix.com/documentation/hosted/Nutanix-Cloud-Clusters-Azure.pdf). +For more information, see "NC2 on Azure Subscription and Billing" in [Nutanix Cloud Clusters on Azure Deployment and User Guide](https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Cloud-Clusters-Azure:Nutanix-Cloud-Clusters-Azure). ## Networking requirements For more information, see "NC2 on Azure Subscription and Billing" in [Nutanix Cl * After a cluster is created, you'll need Virtual IP addresses for both the on-premises cluster and the cluster running in Azure. * Outbound internet access on your Azure portal. * Azure Directory Service resolves the FQDN: -gateway-external-api.console.nutanix.com. +gateway-external-api.cloud.nutanix.com. ## Other requirements |
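A quick way to verify the FQDN requirement above from your environment is a one-line resolution check (sketched here in Python; any DNS lookup tool works just as well):

```python
import socket

# Raises socket.gaierror if your environment can't resolve the FQDN
# that NC2 on Azure requires.
print(socket.gethostbyname("gateway-external-api.cloud.nutanix.com"))
```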
bastion | Shareable Link | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/shareable-link.md | By default, users in your org will have only read access to shared links. If a u * Shareable Links isn't currently supported over Virtual WAN. * Shareable Links doesn't support connections to on-premises or non-Azure VMs and VMSS. * The Standard SKU is required for this feature.+* Bastion supports a maximum of 50 shareable link requests, including creates and deletes, at a time. ## Prerequisites |
chaos-studio | Chaos Studio Chaos Engineering Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-chaos-engineering-overview.md | Title: Understanding chaos engineering and resilience with Azure Chaos Studio + Title: Understand chaos engineering and resilience with Chaos Studio description: Understand the concepts of chaos engineering and resilience. -# Understanding chaos engineering and resilience +# Understand chaos engineering and resilience -Before you start using Chaos Studio, it's useful to understand the core site reliability engineering concepts being applied. +Before you start using Azure Chaos Studio, it's useful to understand the core site reliability engineering concepts being applied. ## What is resilience? -Creating large-scale, distributed applications has never been easier. Infrastructure is hosted in the cloud, programming language support is diverse, and there are a plethora of open source and hosted components and services to build upon. Unfortunately, there is no reliability guarantee for these underlying components and dependencies, or for systems built upon them. Infrastructure can go offline and service disruptions or outages can occur at any time. Minor disruptions in one area can be magnified and have longstanding side effects in another. +It's never been easier to create large-scale, distributed applications. Infrastructure is hosted in the cloud, and programming language support is diverse. There are also many open-source and hosted components and services to build on. -Applications and services need to plan for and accommodate service outages, disruptions to known and unknown dependencies, sudden unexpected load, and latencies throughout the system. Applications and services need to be designed to handle failure and be hardened against disruptions. +Unfortunately, there's no reliability guarantee for these underlying components and dependencies, or for systems built on them. Infrastructure can go offline, and service disruptions or outages can occur at any time. Minor disruptions in one area can be magnified and have longstanding side effects in another. -Applications and services that deal with stresses and issues gracefully are **resilient**. Individual component reliability is good, but **resilience is a property of the entire system**. End to end system resilience needs to be validated in an integrated, production-like environment with the conditions and load that will be faced in production. +Applications and services must plan for and accommodate issues like: ++- Service outages. +- Disruptions to known and unknown dependencies. +- Sudden unexpected load. +- Latencies throughout the system. ++Applications and services must be designed to handle failure and be hardened against disruptions. ++Applications and services that deal with stresses and issues gracefully are *resilient*. Individual component reliability is good, but *resilience is a property of the entire system*. End-to-end system resilience must be validated in an integrated, production-like environment with the conditions and load that are faced in production. ## What are chaos engineering and fault injection? -**Chaos engineering** is the practice of subjecting applications and services to real world stresses and failures in order to build and validate resilience to unreliable conditions and missing dependencies. **Fault injection** is the act of introducing an error to a system. 
Different faults, such as network latency or loss of access to storage, can be used to target system components, causing scenarios that an application or service must be able to handle or recover from. A Chaos experiment is the application of faults individually, in parallel, and/or sequentially against one or more subscription resources or dependencies with the goal of monitoring system behavior and health and acting upon any issues that arise. An experiment can represent a real world scenario such as a datacenter power outage or network latency to a DNS server. It can also be used to simulate edge conditions that occur with Black Friday shopping sprees or when concert tickets go on sale for a popular band. +- **Chaos engineering**: The practice of subjecting applications and services to real-world stresses and failures. The goal is to build and validate resilience to unreliable conditions and missing dependencies. +- **Fault injection**: The act of introducing an error to a system. You can use different faults, such as network latency or loss of access to storage, to target system components. You can create scenarios that an application or service must be able to handle or recover from. ++A chaos experiment is the application of faults individually, in parallel, or sequentially against one or more subscription resources or dependencies. The goal is to monitor system behavior and health so that you can act on any issues that arise. ++An experiment can represent a real-world scenario, such as a datacenter power outage or network latency to a DNS server. It can also be used to simulate edge conditions, such as Black Friday shopping sprees or concert tickets going on sale for a popular band. |
chaos-studio | Chaos Studio Chaos Experiments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-chaos-experiments.md | Title: Chaos experiments in Azure Chaos Studio -description: Understand the concept of a chaos experiment in Azure Chaos Studio. What are the pieces of a chaos experiment? How can I create a chaos experiment? +description: Understand the concept of a chaos experiment in Azure Chaos Studio. What are the parts of a chaos experiment? How can you create a chaos experiment? -In Chaos Studio, you create and run chaos experiments. A chaos experiment is an Azure resource that describes the faults that should be run and the resources those faults should be run against. An experiment is divided into two sections: -- **Selectors**: Selectors are groups of target resources that will have faults or other actions run against them. A selector allows you to logically group resources for reuse across multiple actions. For example, you might have a selector named "AllNonProdEastUSVMs" in which you have added all the non-production virtual machines in East US. You could then apply CPU pressure followed by virtual memory pressure to those virtual machines just by referencing the selector.-- **Logic**: The rest of the experiment describes how and when to run faults. An experiment is organized into **steps** that run one after the other. Each step has one or more **branches** that run at the same time. Steps and branches allow you to inject multiple faults across resources in your environment in parallel. Each branch has one or more **actions** which are either the faults you want to run or time delays. **Faults** are actions that cause some disruption. Most faults take one or more **parameters**, such as the duration to run the fault or the amount of stress to apply.+In Azure Chaos Studio, you create and run chaos experiments. A chaos experiment is an Azure resource that describes the faults that should be run and the resources those faults should be run against. - +An experiment is divided into two sections: ++- **Selectors**: Selectors are groups of target resources that have faults or other actions run against them. A selector allows you to logically group resources for reuse across multiple actions. ++ For example, you might have a selector named `AllNonProdEastUSVMs`, in which you've added all the nonproduction virtual machines in East US. You could then apply CPU pressure followed by virtual memory pressure to those virtual machines by referencing the selector. +- **Logic**: The rest of the experiment describes how and when to run faults. An experiment is organized into *steps* that run one after the other. Each step has one or more *branches* that run at the same time. Steps and branches allow you to inject multiple faults across resources in your environment in parallel. ++ Each branch has one or more *actions*, which are either the faults you want to run or time delays. *Faults* are actions that cause some disruption. Most faults take one or more *parameters*, such as the duration to run the fault or the amount of stress to apply. ++ ## Cross-subscription and cross-tenant experiments A chaos experiment is an Azure resource deployed to a subscription, resource group, and region. You can use the Azure portal or the Chaos Studio REST API to create, update, start, cancel, and view the status of an experiment. 
-Chaos experiments can target resources in a different subscription than the experiment as long as the subscription is within the same Azure tenant. Chaos experiments can target resources in a different region than the experiment as long as the region is a supported region for Chaos Studio. +Chaos experiments can target resources in a different subscription than the experiment if the subscription is within the same Azure tenant. Chaos experiments can target resources in a different region than the experiment if the region is a supported region for Chaos Studio. ## Next steps-Now that you understand what a chaos experiment is you are ready to: +Now that you understand what a chaos experiment is, you're ready to: + - [Learn about faults and actions](chaos-studio-faults-actions.md) - [Create and run your first experiment](chaos-studio-tutorial-service-direct-portal.md) |
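To make the selector/step/branch/action hierarchy concrete, here's a hedged sketch of an experiment body expressed as a Python dict (the kind of payload you'd send to the REST API). The field names follow the public experiment schema, but treat the exact shape, the fault URN, and the resource ID as assumptions to verify against the REST reference.

```python
experiment = {
    "properties": {
        "selectors": [{
            "id": "AllNonProdEastUSVMs",      # reusable logical group of targets
            "type": "List",
            "targets": [{"type": "ChaosTarget", "id": "<target-resource-id>"}],
        }],
        "steps": [{                            # steps run one after another
            "name": "Step 1",
            "branches": [{                     # branches within a step run in parallel
                "name": "Branch 1",
                "actions": [{                  # faults or time delays
                    "type": "continuous",
                    "name": "urn:csci:microsoft:agent:cpuPressure/1.0",
                    "duration": "PT10M",
                    "parameters": [{"key": "pressureLevel", "value": "95"}],
                    "selectorId": "AllNonProdEastUSVMs",
                }],
            }],
        }],
    }
}
```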
chaos-studio | Chaos Studio Fault Providers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-fault-providers.md | -The following are the supported resource types for faults, the target types, and suggested roles to use when giving an experiment permission to a resource of that type. +The following table lists the supported resource types for faults, the target types, and suggested roles to use when you give an experiment permission to a resource of that type. -| Resource Type | Target name/type | Suggested role assignment | +| Resource type | Target name/type | Suggested role assignment | |-|--|-| | Microsoft.Cache/Redis (service-direct) | Microsoft-AzureCacheForRedis | Redis Cache Contributor | | Microsoft.ClassicCompute/domainNames (service-direct) | Microsoft-DomainNames | Classic Virtual Machine Contributor | The following are the supported resource types for faults, the target types, and | Microsoft.Compute/virtualMachines (service-direct) | Microsoft-VirtualMachine | Virtual Machine Contributor | | Microsoft.Compute/virtualMachineScaleSets (service-direct) | Microsoft-VirtualMachineScaleSet | Virtual Machine Contributor | | Microsoft.ContainerService/managedClusters (service-direct) | Microsoft-AzureKubernetesServiceChaosMesh | Azure Kubernetes Service Cluster Admin Role |-| Microsoft.DocumentDb/databaseAccounts (CosmosDB, service-direct) | Microsoft-CosmosDB | Cosmos DB Operator | +| Microsoft.DocumentDb/databaseAccounts (CosmosDB, service-direct) | Microsoft-CosmosDB | Azure Cosmos DB Operator | | Microsoft.Insights/autoscalesettings (service-direct) | Microsoft-AutoScaleSettings | Web Plan Contributor |-| Microsoft.KeyVault/vaults (service-direct) | Microsoft-KeyVault | Key Vault Contributor | +| Microsoft.KeyVault/vaults (service-direct) | Microsoft-KeyVault | Azure Key Vault Contributor | | Microsoft.Network/networkSecurityGroups (service-direct) | Microsoft-NetworkSecurityGroup | Network Contributor | |
chaos-studio | Chaos Studio Faults Actions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-faults-actions.md | -In Chaos Studio, every activity that happens as part of an experiment is called an **action** and the most common type of action is a **fault**. This article describes actions and faults and the properties of each. +In Azure Chaos Studio, every activity that happens as part of an experiment is called an *action*. The most common type of action is a *fault*. This article describes actions and faults and the properties of each. ## Experiment actions -An action is any activity that is orchestrated as part of a chaos experiment. Actions are organized into steps and branches, enabling actions to be run either sequentially or in parallel. Every action has the following properties: -* **Name**: The specific action that takes place. A name usually takes the form of a URN for the action, for example, `urn: -* **Type**: The way that the action executes. Actions can be either *continuous*, meaning that the action runs nonstop over a period of time (for example, applying CPU pressure for 10 minutes), or *discrete*, meaning that the action occurs only once (for example, rebooting a Redis Cache instance). +An action is any activity that's orchestrated as part of a chaos experiment. Actions are organized into steps and branches, enabling actions to run either sequentially or in parallel. Every action has the following properties: ++* **Name**: The specific action that takes place. A name usually takes the form of a URN for the action, for example, `urn:csci:microsoft:agent:cpuPressure/1.0`. +* **Type**: The way that the action executes. Actions can be either *continuous* or *discrete*. A continuous action runs nonstop over a period of time. An example is applying CPU pressure for 10 minutes. A discrete action occurs only once. An example is rebooting an Azure Cache for Redis instance. ## Types of actions There are two varieties of actions in Chaos Studio:-- **Faults** - This action causes a disruption in one or more resources.-- **Time delays** - This action "waits" without impacting any resources. It is useful for pausing in between faults to wait for a system to be impacted by the previous fault.++- **Faults**: This action causes a disruption in one or more resources. +- **Time delays**: This action "waits" without affecting any resources. It's useful for pausing in between faults to wait for a system to be affected by the previous fault. ## Faults -Faults are the most common action in Chaos Studio. Faults cause a disruption in a system, allowing you to verify that the system effectively handles that disruption without impacting availability. Faults can be destructive (for example, killing a process), apply pressure (for example, adding virtual memory pressure), add latency, or cause a configuration change. In addition to a name and type, faults may also have a *duration*, if continuous, and *parameters*. Parameters describe how the fault should be applied and are specific to the fault name. For example, a parameter for the Azure Cosmos DB failover fault is the read region that will be promoted to the write region during the write region failure. Some parameters are required while others are optional. +Faults are the most common action in Chaos Studio. Faults cause a disruption in a system, allowing you to verify that the system effectively handles that disruption without affecting availability. ++Faults can: ++- Be destructive. For example, a fault can kill a process. 
+- Apply pressure. For example, a fault can add virtual memory pressure. +- Add latency. +- Cause a configuration change. ++In addition to a name and type, faults might also have a *duration*, if continuous, and *parameters*. Parameters describe how the fault should be applied and are specific to the fault name. For example, a parameter for the Azure Cosmos DB failover fault is the read region that will be promoted to the write region during the write region failure. Some parameters are required while others are optional. -Faults are either *agent-based* or *service-direct* depending on the target type. An agent-based fault requires the Chaos Studio agent to be installed on a virtual machine or virtual machine scale set. The agent is available for both Windows and Linux, but not all faults are available on both operating systems. See the [fault library](chaos-studio-fault-library.md) for information on which faults are supported on each operating system. Service-direct faults do not require any agent - they run directly against an Azure resource. +Faults are either *agent-based* or *service-direct* depending on the target type. An agent-based fault requires the Chaos Studio agent to be installed on a virtual machine or virtual machine scale set. The agent is available for both Windows and Linux, but not all faults are available on both operating systems. For information on which faults are supported on each operating system, see [Chaos Studio fault and action library](chaos-studio-fault-library.md). Service-direct faults don't require any agent. They run directly against an Azure resource. -Faults also include the name of the selector that describes the resources that the fault will run against. You can learn more about selectors [in the article about chaos experiments](chaos-studio-chaos-experiments.md). A fault can only impact a resource if the resource has been onboarded as a target and has the corresponding fault capability enabled on the resource. +Faults also include the name of the selector that describes the resources that the fault runs against. To learn more about selectors, see [Chaos experiments](chaos-studio-chaos-experiments.md). A fault can only affect a resource if the resource has been onboarded as a target and has the corresponding fault capability enabled on the resource. ## Next steps-Now that you understand actions and faults, you're ready to: - [Create and run your first experiment](chaos-studio-tutorial-service-direct-portal.md) |
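As a hedged illustration of the action properties just described, here are two actions sketched as Python dicts: a continuous agent-based fault with a duration and parameters, and a time delay between faults. The URNs and the parameter key are taken from the fault library, but verify them there before use.

```python
# Continuous fault: applies CPU pressure for 10 minutes (ISO 8601 duration).
cpu_pressure = {
    "type": "continuous",
    "name": "urn:csci:microsoft:agent:cpuPressure/1.0",
    "duration": "PT10M",
    "parameters": [{"key": "pressureLevel", "value": "95"}],
    "selectorId": "Selector1",
}

# Time delay: waits five minutes without touching any resource, so the
# system can feel the effect of the previous fault.
time_delay = {
    "type": "delay",
    "name": "urn:csci:microsoft:chaosStudio:timedDelay/1.0",
    "duration": "PT5M",
}
```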
chaos-studio | Chaos Studio Limitations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-limitations.md | Title: Azure Chaos Studio limitations and known issues -description: Understand current limitations and known issues when using Azure Chaos Studio. +description: Understand current limitations and known issues when you use Azure Chaos Studio. -# Azure Chaos Studio Preview Limitations and Known Issues +# Azure Chaos Studio Preview limitations and known issues During the public preview of Azure Chaos Studio, there are a few limitations and known issues that the team is aware of and working to resolve. -## Limitations +## Limitations -* The target resources must be in [one of the regions supported by the Azure Chaos Studio Preview](https://azure.microsoft.com/global-infrastructure/services/?products=chaos-studio) +* The target resources must be in [one of the regions supported by the Azure Chaos Studio Preview](https://azure.microsoft.com/global-infrastructure/services/?products=chaos-studio). * For agent-based faults, the virtual machine must have outbound network access to the Chaos Studio agent service:- * Regional endpoints to allowlist are listed [in this article](chaos-studio-permissions-security.md#network-security). - * If sending telemetry data to Application Insights, the IPs [in this document](../azure-monitor/app/ip-addresses.md) are also required. -* If running an experiment that makes use of the Chaos Agent, the virtual machine must run one of the following **operating systems**: + * Regional endpoints to allowlist are listed in [Permissions and security in Azure Chaos Studio](chaos-studio-permissions-security.md#network-security). + * If you're sending telemetry data to Application Insights, the IPs in [IP addresses used by Azure Monitor](../azure-monitor/app/ip-addresses.md) are also required. +* If you run an experiment that makes use of the Chaos Studio agent, the virtual machine must run one of the following operating systems: + * Windows Server 2019, Windows Server 2016, Windows Server 2012, and Windows Server 2012 R2- * Redhat Enterprise Linux 8.2, SUSE Enterprise Linux 15 SP2, CentOS 8.2, Debian 10 Buster (with unzip installation required), Oracle Linux 7.8 Ubuntu Server 16.04 LTS, and Ubuntu Server 18.04 LTS -* The Chaos Agent is not tested against custom Linux distributions, hardened Linux distributions (for example, FIPS or SELinux) + * Red Hat Enterprise Linux 8.2, SUSE Enterprise Linux 15 SP2, CentOS 8.2, Debian 10 Buster (with unzip installation required), Oracle Linux 7.8, Ubuntu Server 16.04 LTS, and Ubuntu Server 18.04 LTS +* The Chaos Studio agent isn't tested against custom Linux distributions or hardened Linux distributions (for example, FIPS or SELinux). * The Chaos Studio portal experience has only been tested on the following browsers:- * **Windows:** Microsoft Edge, Google Chrome, Firefox - * **MacOS:** Safari, Google Chrome, Firefox + * **Windows:** Microsoft Edge, Google Chrome, and Firefox + * **MacOS:** Safari, Google Chrome, and Firefox ## Known issues-* When picking target resources for an agent-based fault in the experiment designer, it is possible to select virtual machines or virtual machine scale sets with an operating system not supported by the fault selected. -+When you pick target resources for an agent-based fault in the experiment designer, it's possible to select virtual machines or virtual machine scale sets with an operating system not supported by the fault selected. 
## Next steps-Get started creating and running chaos experiments to improve application resilience with Chaos Studio using the links below. +Get started creating and running chaos experiments to improve application resilience with Chaos Studio by using the following links: - [Create and run your first experiment](chaos-studio-tutorial-service-direct-portal.md) - [Learn more about chaos engineering](chaos-studio-chaos-engineering-overview.md) |
chaos-studio | Chaos Studio Tutorial Agent Based Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-agent-based-cli.md | Title: Create an experiment that uses an agent-based fault with Azure Chaos Studio with the Azure CLI -description: Create an experiment that uses an agent-based fault and configure the chaos agent with the Azure CLI + Title: Create an experiment using an agent-based fault with Azure CLI +description: Create an experiment that uses an agent-based fault and configure the chaos agent with the Azure CLI. Last updated 11/10/2021-# Create a chaos experiment that uses an agent-based fault on a virtual machine or virtual machine scale set with the Azure CLI +# Create a chaos experiment that uses an agent-based fault with the Azure CLI -You can use a chaos experiment to verify that your application is resilient to failures by causing those failures in a controlled environment. In this guide, you will cause a high CPU event on a Linux virtual machine using a chaos experiment and Azure Chaos Studio. Running this experiment can help you defend against an application becoming resource-starved. --These same steps can be used to set up and run an experiment for any agent-based fault. An **agent-based** fault requires setup and installation of the chaos agent, unlike a service-direct fault, which runs directly against an Azure resource without any need for instrumentation. +You can use a chaos experiment to verify that your application is resilient to failures by causing those failures in a controlled environment. In this article, you cause a high CPU event on a Linux virtual machine (VM) by using a chaos experiment and Azure Chaos Studio Preview. Run this experiment to help you defend against an application becoming resource starved. +You can use these same steps to set up and run an experiment for any agent-based fault. An *agent-based* fault requires setup and installation of the chaos agent. A service-direct fault runs directly against an Azure resource without any need for instrumentation. ## Prerequisites -- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] -- A virtual machine. If you do not have a virtual machine, you can [follow these steps to create one](../virtual-machines/linux/quick-create-portal.md).-- A network setup that permits you to [SSH into your virtual machine](../virtual-machines/ssh-keys-portal.md)-- A user-assigned managed identity. If you do not have a user-assigned managed identity, you can [follow these steps to create one](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md)+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] +- A virtual machine. If you don't have a VM, you can [create one](../virtual-machines/linux/quick-create-portal.md). +- A network setup that permits you to [SSH into your VM](../virtual-machines/ssh-keys-portal.md). +- A user-assigned managed identity. If you don't have a user-assigned managed identity, you can [create one](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md). -## Launch Azure Cloud Shell -The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account. 
+Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account. -To open the Cloud Shell, just select **Try it** from the upper right corner of a code block. You can also open Cloud Shell in a separate browser tab by going to [https://shell.azure.com/bash](https://shell.azure.com/bash). Select **Copy** to copy the blocks of code, paste it into the Cloud Shell, and select **Enter** to run it. +To open Cloud Shell, select **Try it** in the upper-right corner of a code block. You can also open Cloud Shell in a separate browser tab by going to [Bash](https://shell.azure.com/bash). Select **Copy** to copy the blocks of code, paste it into Cloud Shell, and select **Enter** to run it. If you prefer to install and use the CLI locally, this tutorial requires Azure CLI version 2.0.30 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). > [!NOTE]-> These instructions use a Bash terminal in Azure Cloud Shell. Some commands may not work as described if running the CLI locally or in a PowerShell terminal. +> These instructions use a Bash terminal in Cloud Shell. Some commands might not work as described if you run the CLI locally or in a PowerShell terminal. ## Assign managed identity to the virtual machine -Before setting up Chaos Studio on the virtual machine, you need to assign a user-assigned managed identity to each virtual machine and/or virtual machine scale set where you plan to install the agent by using the `az vm identity assign` or `az vmss identity assign` command. Replace `$VM_RESOURCE_ID`/`$VMSS_RESOURCE_ID` with the resource ID of the VM you are adding as a chaos target and `$MANAGED_IDENTITY_RESOURCE_ID` with the resource ID of the user-assigned managed identity. +Before you set up Chaos Studio on the VM, assign a user-assigned managed identity to each VM or virtual machine scale set where you plan to install the agent. Use the `az vm identity assign` or `az vmss identity assign` command. Replace `$VM_RESOURCE_ID`/`$VMSS_RESOURCE_ID` with the resource ID of the VM you're adding as a chaos target. Replace `$MANAGED_IDENTITY_RESOURCE_ID` with the resource ID of the user-assigned managed identity. -**Virtual Machine** +Virtual machine ```azurecli-interactive az vm identity assign --ids $VM_RESOURCE_ID --identities $MANAGED_IDENTITY_RESOURCE_ID ``` -**Virtual Machine Scale Set** +Virtual machine scale set ```azurecli-interactive az vmss identity assign --ids $VMSS_RESOURCE_ID --identities $MANAGED_IDENTITY_RESOURCE_ID ``` az vmss identity assign --ids $VMSS_RESOURCE_ID --identities $MANAGED_IDENTITY_RESOURCE_ID ## Enable Chaos Studio on your virtual machine -Chaos Studio cannot inject faults against a virtual machine unless that virtual machine has been onboarded to Chaos Studio first. You onboard a virtual machine to Chaos Studio by creating a [target and capabilities](chaos-studio-targets-capabilities.md) on the resource, then installing the chaos agent. Virtual machines have two target types - one that enables service-direct faults (where no agent is required), and one that enabled agent-based faults (which requires the installation of an agent). The chaos agent is an application installed on your virtual machine as a [virtual machine extension](../virtual-machines/extensions/overview.md) that allows you to inject faults in the guest operating system. 
+Chaos Studio can't inject faults against a VM unless that VM was added to Chaos Studio first. To add a VM to Chaos Studio, create a [target and capabilities](chaos-studio-targets-capabilities.md) on the resource. Then you install the chaos agent. ++Virtual machines have two target types. One target type enables service-direct faults (where no agent is required). The other target type enables agent-based faults (which requires the installation of an agent). The chaos agent is an application installed on your VM as a [VM extension](../virtual-machines/extensions/overview.md). You use it to inject faults in the guest operating system. ### Install stress-ng (Linux only) -The Chaos Studio agent for Linux requires stress-ng, an open-source application that can cause various stress events on a virtual machine. You can install stress-ng by [connecting to your Linux virtual machine](../virtual-machines/ssh-keys-portal.md) and running the appropriate installation command for your package manager, for example: +The Chaos Studio agent for Linux requires stress-ng. This open-source application can cause various stress events on a VM. To install stress-ng, [connect to your Linux VM](../virtual-machines/ssh-keys-portal.md). Then run the appropriate installation command for your package manager. For example: ```bash sudo apt-get update && sudo apt-get -y install unzip && sudo apt-get -y install stress-ng Or: sudo dnf -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm && sudo yum -y install stress-ng ``` -### Enable chaos target and capabilities +### Enable the chaos target and capabilities -Next, set up a Microsoft-Agent target on each virtual machine or virtual machine scale set that specifies the user-assigned managed identity that the agent will use to connect to Chaos Studio. In this example, we use one managed identity for all VMs. A target must be created via REST API. In this example, we use the `az rest` CLI command to execute the REST API calls. +Next, set up a Microsoft-Agent target on each VM or virtual machine scale set that specifies the user-assigned managed identity that the agent uses to connect to Chaos Studio. In this example, we use one managed identity for all VMs. A target must be created via REST API. In this example, we use the `az rest` CLI command to execute the REST API calls. -1. Modify the following JSON by replacing `$USER_IDENTITY_CLIENT_ID` with the clientID of your managed identity, which you can find in the Azure portal overview of the user-assigned managed identity you created, and `$USER_IDENTITY_TENANT_ID` with your Azure tenant ID, which you can find in the Azure portal under **Azure Active Directory** under **Tenant information**. Save the JSON as a file in the same location where you are running the Azure CLI (in Cloud Shell you can drag-and-drop the JSON file to upload it). +1. Modify the following JSON by replacing `$USER_IDENTITY_CLIENT_ID` with the client ID of your managed identity. You can find the client ID in the Azure portal overview of the user-assigned managed identity you created. Replace `$USER_IDENTITY_TENANT_ID` with your Azure tenant ID. You can find it in the Azure portal under **Azure Active Directory** under **Tenant information**. Save the JSON as a file in the same location where you're running the Azure CLI. In Cloud Shell, you can drag and drop the JSON file to upload it. ```json { Next, set up a Microsoft-Agent target on each virtual machine or virtual machine } ``` -2. 
Create the target by replacing `$RESOURCE_ID` with the resource ID of the target virtual machine or virtual machine scale set. Replace `target.json` with the name of the JSON file you created in the previous step. +1. Create the target by replacing `$RESOURCE_ID` with the resource ID of the target VM or virtual machine scale set. Replace `target.json` with the name of the JSON file you created in the previous step. ```azurecli-interactive az rest --method put --uri https://management.azure.com/$RESOURCE_ID/providers/Microsoft.Chaos/targets/Microsoft-Agent?api-version=2021-09-15-preview --body @target.json --query properties.agentProfileId -o tsv ``` -3. Copy down the GUID for the **agentProfileId** returned by this command for use in a later step. +1. Copy down the GUID for the **agentProfileId** returned by this command for use in a later step. -4. Create the capabilities by replacing `$RESOURCE_ID` with the resource ID of the target virtual machine or virtual machine scale set and `$CAPABILITY` with the [name of the fault capability you are enabling](chaos-studio-fault-library.md). +1. Create the capabilities by replacing `$RESOURCE_ID` with the resource ID of the target VM or virtual machine scale set. Replace `$CAPABILITY` with the [name of the fault capability you're enabling](chaos-studio-fault-library.md). ```azurecli-interactive az rest --method put --url "https://management.azure.com/$RESOURCE_ID/providers/Microsoft.Chaos/targets/Microsoft-Agent/capabilities/$CAPABILITY?api-version=2021-09-15-preview" --body "{\"properties\":{}}" ``` - For example, if enabling the CPU Pressure capability: + For example, if you're enabling the CPU Pressure capability: ```azurecli-interactive az rest --method put --url "https://management.azure.com/subscriptions/b65f2fec-d6b2-4edd-817e-9339d8c01dc4/resourceGroups/myRG/providers/Microsoft.Compute/virtualMachines/myVM/providers/Microsoft.Chaos/targets/Microsoft-Agent/capabilities/CPUPressure-1.0?api-version=2021-09-15-preview" --body "{\"properties\":{}}" Next, set up a Microsoft-Agent target on each virtual machine or virtual machine ### Install the Chaos Studio virtual machine extension -The chaos agent is an application that runs in your virtual machine or virtual machine scale set instances to execute agent-based faults. During installation, you configure the agent with the managed identity the agent should use to authenticate to Chaos Studio, the profile ID of the Microsoft-Agent target that you created, and optionally an Application Insights instrumentation key that enables the agent to send diagnostic events to Azure Application Insights. +The chaos agent is an application that runs in your VM or virtual machine scale set instances to execute agent-based faults. During installation, you configure: -1. Before beginning, make sure you have the following details: - * **agentProfileId** - the property returned when creating the target. If you don't have this property, you can run `az rest --method get --uri https://management.azure.com/$RESOURCE_ID/providers/Microsoft.Chaos/targets/Microsoft-Agent?api-version=2021-09-15-preview` and copy the `agentProfileId` property. - * **ClientId** - the client ID of the user-assigned managed identity used in the target. 
If you don't have this property, you can run `az rest --method get --uri https://management.azure.com/$RESOURCE_ID/providers/Microsoft.Chaos/targets/Microsoft-Agent?api-version=2021-09-15-preview` and copy the `clientId` property - * (optionally) **AppInsightsKey** - the instrumentation key for your Application Insights component, which you can find in the Application Insights page in the portal under **Essentials**. +- The managed identity that the agent should use to authenticate to Chaos Studio. +- The profile ID of the Microsoft-Agent target that you created. +- Optionally, an Application Insights instrumentation key that enables the agent to send diagnostic events to Application Insights. -2. Install the Chaos Studio VM extension. Replace `$VM_RESOURCE_ID` with the resource ID of your VM or replace `$SUBSCRIPTION_ID`, `$RESOURCE_GROUP`, and `$VMSS_NAME` with those properties for your virtual machine scale set. Replace `$AGENT_PROFILE_ID` with the agentProfileId, `$USER_IDENTITY_CLIENT_ID` with the clientID of your managed identity, and `$APP_INSIGHTS_KEY` with your Application Insights instrumentation key. If you are not using Application Insights, remove that key/value pair. +1. Before you begin, make sure you have the following details: + * **agentProfileId**: The property returned when you create the target. If you don't have this property, you can run `az rest --method get --uri https://management.azure.com/$RESOURCE_ID/providers/Microsoft.Chaos/targets/Microsoft-Agent?api-version=2021-09-15-preview` and copy the `agentProfileId` property. + * **ClientId**: The client ID of the user-assigned managed identity used in the target. If you don't have this property, you can run `az rest --method get --uri https://management.azure.com/$RESOURCE_ID/providers/Microsoft.Chaos/targets/Microsoft-Agent?api-version=2021-09-15-preview` and copy the `clientId` property. + * **(Optionally) AppInsightsKey**: The instrumentation key for your Application Insights component, which you can find on the Application Insights page in the portal under **Essentials**. ++1. Install the Chaos Studio VM extension. Replace `$VM_RESOURCE_ID` with the resource ID of your VM or replace `$SUBSCRIPTION_ID`, `$RESOURCE_GROUP`, and `$VMSS_NAME` with those properties for your virtual machine scale set. Replace `$AGENT_PROFILE_ID` with the `agentProfileId`. Replace `$USER_IDENTITY_CLIENT_ID` with the client ID of your managed identity. Replace `$APP_INSIGHTS_KEY` with your Application Insights instrumentation key. If you aren't using Application Insights, remove that key/value pair. 
#### Install the agent on a virtual machine - **Windows** + Windows ```azurecli-interactive az vm extension set --ids $VM_RESOURCE_ID --name ChaosWindowsAgent --publisher Microsoft.Azure.Chaos --version 1.0 --settings '{"profile": "$AGENT_PROFILE_ID", "auth.msi.clientid":"$USER_IDENTITY_CLIENT_ID", "appinsightskey":"$APP_INSIGHTS_KEY"}' ``` - **Linux** + Linux ```azurecli-interactive az vm extension set --ids $VM_RESOURCE_ID --name ChaosLinuxAgent --publisher Microsoft.Azure.Chaos --version 1.0 --settings '{"profile": "$AGENT_PROFILE_ID", "auth.msi.clientid":"$USER_IDENTITY_CLIENT_ID", "appinsightskey":"$APP_INSIGHTS_KEY"}' The chaos agent is an application that runs in your virtual machine or virtual m #### Install the agent on a virtual machine scale set - **Windows** + Windows ```azurecli-interactive az vmss extension set --subscription $SUBSCRIPTION_ID --resource-group $RESOURCE_GROUP --vmss-name $VMSS_NAME --name ChaosWindowsAgent --publisher Microsoft.Azure.Chaos --version 1.0 --settings '{"profile": "$AGENT_PROFILE_ID", "auth.msi.clientid":"$USER_IDENTITY_CLIENT_ID", "appinsightskey":"$APP_INSIGHTS_KEY"}' ``` - **Linux** + Linux ```azurecli-interactive az vmss extension set --subscription $SUBSCRIPTION_ID --resource-group $RESOURCE_GROUP --vmss-name $VMSS_NAME --name ChaosLinuxAgent --publisher Microsoft.Azure.Chaos --version 1.0 --settings '{"profile": "$AGENT_PROFILE_ID", "auth.msi.clientid":"$USER_IDENTITY_CLIENT_ID", "appinsightskey":"$APP_INSIGHTS_KEY"}' ```-3. If setting up a virtual machine scale set, verify that the instances have been upgraded to the latest model. If needed, upgrade all instances in the model. +1. If you're setting up a virtual machine scale set, verify that the instances were upgraded to the latest model. If needed, upgrade all instances in the model. ```azurecli-interactive az vmss update-instances -g $RESOURCE_GROUP -n $VMSS_NAME --instance-ids * The chaos agent is an application that runs in your virtual machine or virtual m ## Create an experiment -With your virtual machine now onboarded, you can create your experiment. A chaos experiment defines the actions you want to take against target resources, organized into steps, which run sequentially, and branches, which run in parallel. +After you've successfully deployed your VM, you can create your experiment. A chaos experiment defines the actions you want to take against target resources. The actions are organized into steps, which run sequentially, and branches within each step, which run in parallel. -1. Formulate your experiment JSON starting with the JSON sample below. Modify the JSON to correspond to the experiment you want to run using the [Create Experiment API](/rest/api/chaosstudio/experiments/create-or-update) and the [fault library](chaos-studio-fault-library.md) +1. Formulate your experiment JSON starting with the following JSON sample. Modify the JSON to correspond to the experiment you want to run by using the [Create Experiment API](/rest/api/chaosstudio/experiments/create-or-update) and the [fault library](chaos-studio-fault-library.md). ```json { With your virtual machine now onboarded, you can create your experiment. 
A chaos } ``` - If running against a virtual machine scale set, modify the fault parameters to include the instance number(s) to target: + If you're running against a virtual machine scale set, modify the fault parameters to include the instance numbers to target: ```json "parameters": [ With your virtual machine now onboarded, you can create your experiment. A chaos ] ``` - You can identify scale set instance numbers in the Azure portal by navigating to your virtual machine scale set and clicking on **Instances**. The instance name will end in the instance number. + You can identify scale set instance numbers in the Azure portal by going to your virtual machine scale set and selecting **Instances**. The instance name ends in the instance number. -2. Create the experiment using the Azure CLI, replacing `$SUBSCRIPTION_ID`, `$RESOURCE_GROUP`, and `$EXPERIMENT_NAME` with the properties for your experiment. Make sure you have saved and uploaded your experiment JSON and update `experiment.json` with your JSON filename. +1. Create the experiment by using the Azure CLI. Replace `$SUBSCRIPTION_ID`, `$RESOURCE_GROUP`, and `$EXPERIMENT_NAME` with the properties for your experiment. Make sure you've saved and uploaded your experiment JSON. Update `experiment.json` with your JSON filename. ```azurecli-interactive az rest --method put --uri https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.Chaos/experiments/$EXPERIMENT_NAME?api-version=2021-09-15-preview --body @experiment.json ``` - Each experiment creates a corresponding system-assigned managed identity. Note of the `principalId` for this identity in the response for the next step. + Each experiment creates a corresponding system-assigned managed identity. Note the principal ID for this identity in the response for the next step. -## Give experiment permission to your virtual machine -When you create a chaos experiment, Chaos Studio creates a system-assigned managed identity that executes faults against your target resources. This identity must be given [appropriate permissions](chaos-studio-fault-providers.md) to the target resource for the experiment to run successfully. The Reader role is required for agent-based faults. Other roles that do not have */Read permission, such as Virtual Machine Contributor, will not grant appropriate permission for agent-based faults. +## Give the experiment permission to your virtual machine +When you create a chaos experiment, Chaos Studio creates a system-assigned managed identity that executes faults against your target resources. This identity must be given [appropriate permissions](chaos-studio-fault-providers.md) to the target resource for the experiment to run successfully. The Reader role is required for agent-based faults. Other roles that don't have */Read permission, such as Virtual Machine Contributor, won't grant appropriate permission for agent-based faults. -Give the experiment access to your virtual machine or virtual machine scale set using the command below, replacing `$EXPERIMENT_PRINCIPAL_ID` with the principalId from the previous step and `$RESOURCE_ID` with the resource ID of the target virtual machine or virtual machine scale set (the resource ID of the VM, not the resource ID of the chaos agent used in the experiment definition). Run this command for each virtual machine or virtual machine scale set targeted in your experiment. +Give the experiment access to your VM or virtual machine scale set by using the following command. 
Replace `$EXPERIMENT_PRINCIPAL_ID` with the principal ID from the previous step. Replace `$RESOURCE_ID` with the resource ID of the target VM or virtual machine scale set. Be sure to use the resource ID of the VM, not the resource ID of the chaos agent used in the experiment definition. Run this command for each VM or virtual machine scale set targeted in your experiment. ```azurecli-interactive az role assignment create --role "Reader" --assignee-principal-type "ServicePrincipal" --assignee-object-id $EXPERIMENT_PRINCIPAL_ID --scope $RESOURCE_ID ``` - ## Run your experiment-You are now ready to run your experiment. To see the impact, we recommend opening [an Azure Monitor metrics chart](../azure-monitor/essentials/tutorial-metrics.md) with your virtual machine's CPU pressure in a separate browser tab. +You're now ready to run your experiment. To see the effect, we recommend that you open [an Azure Monitor metrics chart](../azure-monitor/essentials/tutorial-metrics.md) with your VM's CPU pressure in a separate browser tab. -1. Start the experiment using the Azure CLI, replacing `$SUBSCRIPTION_ID`, `$RESOURCE_GROUP`, and `$EXPERIMENT_NAME` with the properties for your experiment. +1. Start the experiment by using the Azure CLI. Replace `$SUBSCRIPTION_ID`, `$RESOURCE_GROUP`, and `$EXPERIMENT_NAME` with the properties for your experiment. ```azurecli-interactive az rest --method post --uri https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.Chaos/experiments/$EXPERIMENT_NAME/start?api-version=2021-09-15-preview ``` -2. The response includes a status URL that you can use to query experiment status as the experiment runs. +1. The response includes a status URL that you can use to query experiment status as the experiment runs. ## Next steps-Now that you have run an agent-based experiment, you are ready to: +Now that you've run an agent-based experiment, you're ready to: - [Create an experiment that uses service-direct faults](chaos-studio-tutorial-service-direct-portal.md) - [Manage your experiment](chaos-studio-run-experiment.md) |
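For the agent-based CLI tutorial above, you can let `az rest` extract the principal ID for you instead of copying it from the raw response. This is a minimal sketch, assuming the PUT response exposes the identity as a top-level `identity.principalId` property; adjust the JMESPath query if your API version returns a different shape:

```azurecli-interactive
# Create the experiment and capture the managed identity's principal ID in one step.
EXPERIMENT_PRINCIPAL_ID=$(az rest --method put \
  --uri "https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.Chaos/experiments/$EXPERIMENT_NAME?api-version=2021-09-15-preview" \
  --body @experiment.json \
  --query identity.principalId --output tsv)
echo "$EXPERIMENT_PRINCIPAL_ID"
```

The captured value can then be passed directly to the `az role assignment create` command shown in the tutorial.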
chaos-studio | Chaos Studio Tutorial Agent Based Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-agent-based-portal.md | Title: Create an experiment that uses an agent-based fault with Azure Chaos Studio with the portal -description: Create an experiment that uses an agent-based fault and configure the chaos agent with the portal + Title: Create an experiment using an agent-based fault with the portal +description: Create an experiment that uses an agent-based fault and configure the chaos agent with the portal. Last updated 11/01/2021-# Create a chaos experiment that uses an agent-based fault to add CPU pressure to a Linux VM with the Azure portal +# Create a chaos experiment that uses an agent-based fault with the Azure portal -You can use a chaos experiment to verify that your application is resilient to failures by causing those failures in a controlled environment. In this guide, you will cause a high CPU event on a Linux virtual machine using a chaos experiment and Azure Chaos Studio. Running this experiment can help you defend against an application becoming resource-starved. --These same steps can be used to set up and run an experiment for any agent-based fault. An **agent-based** fault requires setup and installation of the chaos agent, unlike a service-direct fault, which runs directly against an Azure resource without any need for instrumentation. +You can use a chaos experiment to verify that your application is resilient to failures by causing those failures in a controlled environment. In this article, you cause a high CPU event on a Linux virtual machine (VM) by using a chaos experiment and Azure Chaos Studio Preview. Running this experiment can help you prevent an application from becoming resource starved. +You can use these same steps to set up and run an experiment for any agent-based fault. An *agent-based* fault requires setup and installation of the chaos agent. A service-direct fault runs directly against an Azure resource without any need for instrumentation. ## Prerequisites -- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] -- A Linux virtual machine. If you do not have a virtual machine, you can [follow these steps to create one](../virtual-machines/linux/quick-create-portal.md).-- A network setup that permits you to [SSH into your virtual machine](../virtual-machines/ssh-keys-portal.md)-- A user-assigned managed identity **that has been assigned to the target virtual machine or virtual machine scale set**. If you do not have a user-assigned managed identity, you can [follow these steps to create one](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md)-+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] +- A Linux VM. If you don't have a VM, you can [create one](../virtual-machines/linux/quick-create-portal.md). +- A network setup that permits you to [SSH into your VM](../virtual-machines/ssh-keys-portal.md). +- A user-assigned managed identity *that was assigned to the target VM or virtual machine scale set*. If you don't have a user-assigned managed identity, you can [create one](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md). 
## Enable Chaos Studio on your virtual machine -Chaos Studio cannot inject faults against a virtual machine unless that virtual machine has been onboarded to Chaos Studio first. You onboard a virtual machine to Chaos Studio by creating a [target and capabilities](chaos-studio-targets-capabilities.md) on the resource, then installing the chaos agent. Virtual machines have two target types - one that enables service-direct faults (where no agent is required), and one that enabled agent-based faults (which requires the installation of an agent). The chaos agent is an application installed on your virtual machine as a [virtual machine extension](../virtual-machines/extensions/overview.md) that allows you to inject faults in the guest operating system. +Chaos Studio can't inject faults against a VM unless that VM was added to Chaos Studio first. To add a VM to Chaos Studio, create a [target and capabilities](chaos-studio-targets-capabilities.md) on the resource. Then you install the chaos agent. ++Virtual machines have two target types. One target type enables service-direct faults (where no agent is required). Another target type enables agent-based faults (which requires the installation of an agent). The chaos agent is an application installed on your VM as a [VM extension](../virtual-machines/extensions/overview.md). You use it to inject faults in the guest operating system. ### Install stress-ng -The Chaos Studio agent for Linux requires stress-ng, an open-source application that can cause various stress events on a virtual machine. You can install stress-ng by [connecting to your Linux virtual machine](../virtual-machines/ssh-keys-portal.md) and running the appropriate installation command for your package manager, for example: +The Chaos Studio agent for Linux requires stress-ng. This open-source application can cause various stress events on a VM. To install stress-ng, [connect to your Linux VM](../virtual-machines/ssh-keys-portal.md). Then run the appropriate installation command for your package manager. For example: ```bash sudo apt-get update && sudo apt-get -y install unzip && sudo apt-get -y install stress-ng ``` -or +Or: ```bash sudo dnf -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm && sudo yum -y install stress-ng ``` -### Enable chaos target, capabilities, and agent +### Enable the chaos target, capabilities, and agent > [!IMPORTANT]-> Prior to completing the steps below, you must [create a user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md) and assign it to the target virtual machine or virtual machine scale set. +> Before you complete the next steps, you must [create a user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md). Then assign it to the target VM or virtual machine scale set. 1. Open the [Azure portal](https://portal.azure.com).-2. Search for **Chaos Studio (preview)** in the search bar. -3. Click on **Targets** and navigate to your virtual machine. - -4. Check the box next to your virtual machine and click **Enable targets** then **Enable agent-based targets** from the dropdown menu. - -5. Select the **Managed Identity** that you will use to authenticate the chaos agent and optionally enable Application Insights to see experiment events and agent logs. - -6. Click **Review + Enable** then click **Enable**. - -7. 
After a few minutes, a notification will appear indicating that the resource(s) selected were successfully enabled. The Azure portal will add the user-assigned identity to the virtual machine, enable the agent target and capabilities, and install the chaos agent as a virtual machine extension. - -8. If enabling a virtual machine scale set, upgrade instances to the latest model by going to the virtual machine scale set resource blade, clicking **Instances**, then selecting all instances and clicking **Upgrade** if not on the latest model. --You have now successfully onboarded your Linux virtual machine to Chaos Studio. In the **Targets** view you can also manage the capabilities enabled on this resource. Clicking the **Manage actions** link next to a resource will display the capabilities enabled for that resource. +1. Search for **Chaos Studio (preview)** in the search bar. +1. Select **Targets** and go to your VM. ++  +1. Select the checkbox next to your VM and select **Enable targets**. Then select **Enable agent-based targets** from the dropdown menu. ++  +1. Select the **Managed Identity** to use to authenticate the chaos agent and optionally enable Application Insights to see experiment events and agent logs. ++  +1. Select **Review + Enable** > **Enable**. ++  +1. After a few minutes, a notification appears indicating that the selected resources were successfully enabled. The Azure portal adds the user-assigned identity to the VM. The portal enables the agent target and capabilities and installs the chaos agent as a VM extension. ++  +1. If you're enabling a virtual machine scale set, upgrade instances to the latest model by going to the virtual machine scale set resource pane. Select **Instances**, and then select all instances. Select **Upgrade** if you're not on the latest model. ++You've now successfully added your Linux VM to Chaos Studio. In the **Targets** view, you can also manage the capabilities enabled on this resource. Select the **Manage actions** link next to a resource to display the capabilities enabled for that resource. ## Create an experiment-With your virtual machine now onboarded, you can create your experiment. A chaos experiment defines the actions you want to take against target resources, organized into steps, which run sequentially, and branches, which run in parallel. --1. Click on the **Experiments** tab in the Chaos Studio navigation. In this view, you can see and manage all of your chaos experiments. Click on **Add an experiment** - -2. Fill in the **Subscription**, **Resource Group**, and **Location** where you want to deploy the chaos experiment. Give your experiment a **Name**. Click **Next : Experiment designer >** - -3. You are now in the Chaos Studio experiment designer. The experiment designer allows you to build your experiment by adding steps, branches, and faults. Give a friendly name to your **Step** and **Branch**, then click **Add fault**. - -4. Select **CPU Pressure** from the dropdown, then fill in the **Duration** with the number of minutes to apply pressure and **pressureLevel** with the amount of CPU pressure to apply. Leave **virtualMachineScaleSetInstances** blank. Click **Next: Target resources >** - -5. Select your virtual machine, and click **Next** - -6. Verify that your experiment looks correct, then click **Review + create**, then **Create.** - --## Give experiment permission to your virtual machine +Now you can create your experiment. A chaos experiment defines the actions you want to take against target resources. 
The actions are organized into steps, which run sequentially, and branches, which run in parallel. ++1. Select the **Experiments** tab in Chaos Studio. In this view, you can see and manage all your chaos experiments. Select **Add an experiment**. ++  +1. Fill in the **Subscription**, **Resource Group**, and **Location** where you want to deploy the chaos experiment. Give your experiment a name. Select **Next: Experiment designer**. ++  +1. You're now in the Chaos Studio experiment designer. You can build your experiment by adding steps, branches, and faults. Give a friendly name to your **Step** and **Branch**. Then select **Add fault**. ++  +1. Select **CPU Pressure** from the dropdown list. Fill in **Duration** with the number of minutes to apply pressure. Fill in **pressureLevel** with the amount of CPU pressure to apply. Leave **virtualMachineScaleSetInstances** blank. Select **Next: Target resources**. ++  +1. Select your VM and select **Next**. ++  +1. Verify that your experiment looks correct. Then select **Review + create** > **Create**. ++  ++## Give the experiment permission to your virtual machine When you create a chaos experiment, Chaos Studio creates a system-assigned managed identity that executes faults against your target resources. This identity must be given [appropriate permissions](chaos-studio-fault-providers.md) to the target resource for the experiment to run successfully. -1. Navigate to your virtual machine and click on **Access control (IAM)**. - -2. Click **Add** then click **Add role assignment**. - -3. Search for **Reader** and select the role. Click **Next** - -4. Click **Select members** and search for your experiment name. Select your experiment and click **Select**. If there are multiple experiments in the same tenant with the same name, your experiment name will be truncated with random characters added. - -5. Click **Review + assign** then **Review + assign**. +1. Go to your VM and select **Access control (IAM)**. ++  +1. Select **Add** > **Add role assignment**. ++  +1. Search for **Reader** and select the role. Select **Next**. ++  +1. Choose **Select members** and search for your experiment name. Select your experiment and choose **Select**. If there are multiple experiments in the same tenant with the same name, your experiment name is truncated with random characters added. ++  +1. Select **Review + assign** > **Review + assign**. ## Run your experiment-You are now ready to run your experiment. To see the impact, we recommend opening [an Azure Monitor metrics chart](../azure-monitor/essentials/tutorial-metrics.md) with your virtual machine's CPU pressure in a separate browser tab. +You're now ready to run your experiment. To see the impact, we recommend that you open an [Azure Monitor metrics chart](../azure-monitor/essentials/tutorial-metrics.md) with your VM's CPU pressure in a separate browser tab. ++1. In the **Experiments** view, select your experiment. Select **Start** > **OK**. -1. In the **Experiments** view, click on your experiment, and click **Start**, then click **OK**. - -2. When the **Status** changes to **Running**, click **Details** for the latest run under **History** to see details for the running experiment. +  +1. After the **Status** changes to *Running*, under **History**, select **Details** for the latest run to see details for the running experiment. 
## Next steps-Now that you have run an agent-based experiment, you are ready to: +Now that you've run an agent-based experiment, you're ready to: - [Create an experiment that uses service-direct faults](chaos-studio-tutorial-service-direct-portal.md) - [Manage your experiment](chaos-studio-run-experiment.md) |
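If you'd rather watch the CPU impact of the portal tutorial above from a terminal instead of a portal metrics chart, the same signal is available through `az monitor metrics list`. A sketch, where `$VM_RESOURCE_ID` is a placeholder for the target VM's full resource ID:

```azurecli-interactive
# Sample the VM's CPU percentage at one-minute granularity while the
# CPU Pressure fault runs.
az monitor metrics list \
  --resource $VM_RESOURCE_ID \
  --metric "Percentage CPU" \
  --interval PT1M \
  --output table
```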
chaos-studio | Chaos Studio Tutorial Dynamic Target Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-dynamic-target-cli.md | Title: Create a chaos experiment that uses dynamic targeting to select hosts -description: Create an experiment that uses dynamic targeting with the Azure CLI +description: Create an experiment that uses dynamic targeting with the Azure CLI. ms.devlang: azurecli # Create a chaos experiment that uses dynamic targeting to select hosts -You can use dynamic targeting in a chaos experiment to choose a set of targets to run an experiment against. In this guide, we'll show you how to dynamically target a Virtual Machine Scale Set to shut down based on availability zone. Running this experiment can help you test failover to a Virtual Machine Scale Sets instance in a different region in case of an outage. +You can use dynamic targeting in a chaos experiment to choose a set of targets to run an experiment against. In this article, we show you how to dynamically target virtual machine scale sets to shut down based on availability zone. Running this experiment can help you test failover to an Azure Virtual Machine Scale Sets instance in a different region if there's an outage. -These same steps can be used to set up and run an experiment for any fault that supports dynamic targeting. Currently, only Virtual Machine Scale Sets shutdown supports dynamic targeting. +You can use these same steps to set up and run an experiment for any fault that supports dynamic targeting. Currently, only virtual machine scale set shutdown supports dynamic targeting. ## Prerequisites -- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] -- An Azure Virtual Machine Scale Sets instance- -## Launch Azure Cloud Shell +- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] +- An Azure Virtual Machine Scale Sets instance. -The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account. +## Open Azure Cloud Shell -To open the Cloud Shell, just select **Try it** from the upper right corner of a code block. You can also open Cloud Shell in a separate browser tab by going to [https://shell.azure.com/bash](https://shell.azure.com/bash). Select **Copy** to copy the blocks of code, paste it into the Cloud Shell, and select **Enter** to run it. +Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account. -If you prefer to install and use the CLI locally, this tutorial requires Azure CLI version 2.0.30 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI]( /cli/azure/install-azure-cli). +To open Cloud Shell, select **Try it** in the upper-right corner of a code block. You can also open Cloud Shell in a separate browser tab by going to [Bash](https://shell.azure.com/bash). Select **Copy** to copy the blocks of code, paste it into Cloud Shell, and select **Enter** to run it. ++If you want to install and use the CLI locally, this tutorial requires Azure CLI version 2.0.30 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI]( /cli/azure/install-azure-cli). 
> [!NOTE]-> These instructions use a Bash terminal in Azure Cloud Shell. Some commands may not work as described if running the CLI locally or in a PowerShell terminal. +> These instructions use a Bash terminal in Cloud Shell. Some commands might not work as described if you're running the CLI locally or in a PowerShell terminal. ## Enable Chaos Studio on your Virtual Machine Scale Sets instance -Chaos Studio can't inject faults against a resource unless that resource has been onboarded to Chaos Studio first. You onboard a resource to Chaos Studio by creating a [target and capabilities](chaos-studio-targets-capabilities.md) on the resource. Virtual Machine Scale Sets only has one target type (Microsoft-VirtualMachineScaleSet) and one capability (shutdown), but other resources may have up to two target types - one for service-direct faults and one for agent-based faults - and many capabilities. +Azure Chaos Studio Preview can't inject faults against a resource unless that resource was added to Chaos Studio first. To add a resource to Chaos Studio, create a [target and capabilities](chaos-studio-targets-capabilities.md) on the resource. ++Virtual Machine Scale Sets has only one target type (`Microsoft-VirtualMachineScaleSet`) and one capability (`shutdown`). Other resources might have up to two target types. One target type is for service-direct faults. Another target type is for agent-based faults. Other resources also might have many other capabilities. -1. Create a [target for your Virtual Machine Scale Sets](chaos-studio-fault-providers.md) resource by replacing `$RESOURCE_ID` with the resource ID of the Virtual Machine Scale Set you're onboarding: +1. Create a [target for your virtual machine scale set](chaos-studio-fault-providers.md) resource. Replace `$RESOURCE_ID` with the resource ID of the virtual machine scale set you're adding: ```azurecli-interactive az rest --method put --url "https://management.azure.com/$RESOURCE_ID/providers/Microsoft.Chaos/targets/Microsoft-VirtualMachineScaleSet?api-version=2022-10-01-preview" --body "{\"properties\":{}}" ``` -2. Create the capabilities on the Virtual Machine Scale Sets target by replacing `$RESOURCE_ID` with the resource ID of the resource you're adding, specifying The `VirtualMachineScaleSet` target and the `Shutdown-2.0` capability. +1. Create the capabilities on the virtual machine scale set target. Replace `$RESOURCE_ID` with the resource ID of the resource you're adding. Specify the `VirtualMachineScaleSet` target and the `Shutdown-2.0` capability. ```azurecli-interactive az rest --method put --url "https://management.azure.com/$RESOURCE_ID/providers/Microsoft.Chaos/targets/Microsoft-VirtualMachineScaleSet/capabilities/Shutdown-2.0?api-version=2022-10-01-preview" --body "{\"properties\":{}}" ``` -You've now successfully onboarded your Virtual Machine Scale Set to Chaos Studio. +You've now successfully added your virtual machine scale set to Chaos Studio. ## Create an experiment -With your Virtual Machine Scale Sets now onboarded, you can create your experiment. A chaos experiment defines the actions you want to take against target resources, organized into steps, which run sequentially, and branches, which run in parallel. +Now you can create your experiment. A chaos experiment defines the actions you want to take against target resources. The actions are organized into steps, which run sequentially, and branches, which run in parallel. -1. 
Formulate your experiment JSON starting with the following [Virtual Machine Scale Sets shutdown 2.0](chaos-studio-fault-library.md#version-20) JSON sample. Modify the JSON to correspond to the experiment you want to run using the [Create Experiment API](/rest/api/chaosstudio/experiments/create-or-update) and the [fault library](chaos-studio-fault-library.md). At this time dynamic targeting is only available with the Virtual Machine Scale Set Shutdown 2.0 fault, and can only filter on availability zones. +1. Formulate your experiment JSON starting with the following [Virtual Machine Scale Sets Shutdown 2.0](chaos-studio-fault-library.md#version-20) JSON sample. Modify the JSON to correspond to the experiment you want to run by using the [Create Experiment API](/rest/api/chaosstudio/experiments/create-or-update) and the [fault library](chaos-studio-fault-library.md). At this time, dynamic targeting is only available with the Virtual Machine Scale Sets Shutdown 2.0 fault and can only filter on availability zones. - - Use the `filter` element to configure the list of Azure availability zones to filter targets by. If you don't provide a `filter`, the fault will shut down all instances in the Virtual Machine Scale Set. - - The experiment will target all Virtual Machine Scale Sets instances in the specified zones. + - Use the `filter` element to configure the list of Azure availability zones to filter targets by. If you don't provide a `filter`, the fault shuts down all instances in the virtual machine scale set. + - The experiment targets all Virtual Machine Scale Sets instances in the specified zones. ```json { With your Virtual Machine Scale Sets now onboarded, you can create your experime } ``` -2. Create the experiment using the Azure CLI, replacing `$SUBSCRIPTION_ID`, `$RESOURCE_GROUP`, and `$EXPERIMENT_NAME` with the properties for your experiment. Make sure you've saved and uploaded your experiment JSON and update `experiment.json` with your JSON filename. +1. Create the experiment by using the Azure CLI. Replace `$SUBSCRIPTION_ID`, `$RESOURCE_GROUP`, and `$EXPERIMENT_NAME` with the properties for your experiment. Make sure that you saved and uploaded your experiment JSON. Update `experiment.json` with your JSON filename. ```azurecli-interactive az rest --method put --uri https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.Chaos/experiments/$EXPERIMENT_NAME?api-version=2022-10-01-preview --body @experiment.json ``` - Each experiment creates a corresponding system-assigned managed identity. Note of the `principalId` for this identity in the response for the next step. + Each experiment creates a corresponding system-assigned managed identity. Note the principal ID for this identity in the response for the next step. -## Give experiment permission to your Virtual Machine Scale Sets +## Give experiment permission to your virtual machine scale sets When you create a chaos experiment, Chaos Studio creates a system-assigned managed identity that executes faults against your target resources. This identity must be given [appropriate permissions](chaos-studio-fault-providers.md) to the target resource for the experiment to run successfully. -Give the experiment access to your resource(s) using the following command, replacing `$EXPERIMENT_PRINCIPAL_ID` with the principalId from the previous step and `$RESOURCE_ID` with the resource ID of the target resource. 
Change the role to the appropriate [built-in role for that resource type](chaos-studio-fault-providers.md). Run this command for each resource targeted in your experiment. +Give the experiment access to your resources by using the following command. Replace `$EXPERIMENT_PRINCIPAL_ID` with the principal ID from the previous step. Replace `$RESOURCE_ID` with the resource ID of the target resource. Change the role to the appropriate [built-in role for that resource type](chaos-studio-fault-providers.md). Run this command for each resource targeted in your experiment. ```azurecli-interactive az role assignment create --role "Virtual Machine Contributor" --assignee-object-id $EXPERIMENT_PRINCIPAL_ID --scope $RESOURCE_ID az role assignment create --role "Virtual Machine Contributor" --assignee-object ## Run your experiment -You're now ready to run your experiment. To see the impact, check the portal to view if your Virtual Machine Scale Sets targets are shut down. If they're shut down, check to see that the services running on your Virtual Machine Scale Sets are still running as expected. +You're now ready to run your experiment. To see the effect, check the portal to see if your virtual machine scale sets targets are shut down. If they're shut down, check to see that the services running on your virtual machine scale sets are still running as expected. -1. Start the experiment using the Azure CLI, replacing `$SUBSCRIPTION_ID`, `$RESOURCE_GROUP`, and `$EXPERIMENT_NAME` with the properties for your experiment. +1. Start the experiment by using the Azure CLI. Replace `$SUBSCRIPTION_ID`, `$RESOURCE_GROUP`, and `$EXPERIMENT_NAME` with the properties for your experiment. ```azurecli-interactive az rest --method post --uri https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.Chaos/experiments/$EXPERIMENT_NAME/start?api-version=2022-10-01-preview ``` -2. The response includes a status URL that you can use to query experiment status as the experiment runs. +1. The response includes a status URL that you can use to query experiment status as the experiment runs. ## Next steps-Now that you've run a dynamically targeted Virtual Machine Scale Sets shutdown experiment, you're ready to: +Now that you've run a dynamically targeted virtual machine scale set shutdown experiment, you're ready to: - [Create an experiment that uses agent-based faults](chaos-studio-tutorial-agent-based-portal.md)-- [Manage your experiment](chaos-studio-run-experiment.md)-+- [Manage your experiment](chaos-studio-run-experiment.md) |
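The CLI tutorials above note that the start response includes a status URL. A small polling loop makes that concrete. This sketch assumes the start response carries the URL in a `statusUrl` property and that the status resource exposes `properties.status`; adjust both JMESPath expressions if your response shape differs:

```azurecli-interactive
# Start the experiment, capture the status URL, and poll until a terminal state.
STATUS_URL=$(az rest --method post \
  --uri "https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.Chaos/experiments/$EXPERIMENT_NAME/start?api-version=2022-10-01-preview" \
  --query statusUrl --output tsv)

while true; do
  STATUS=$(az rest --method get --uri "$STATUS_URL" --query properties.status --output tsv)
  echo "Experiment status: $STATUS"
  case "$STATUS" in
    Success|Failed|Cancelled) break ;;
  esac
  sleep 30
done
```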
chaos-studio | Chaos Studio Tutorial Dynamic Target Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-dynamic-target-portal.md | Title: Create a chaos experiment to shut down all targets in a zone -description: Use the Azure portal to create an experiment that uses dynamic targeting to select hosts in a zone +description: Use the Azure portal to create an experiment that uses dynamic targeting to select hosts in a zone. -You can use dynamic targeting in a chaos experiment to choose a set of targets to run an experiment against, based on criteria evaluated at experiment runtime. This guide shows how you can dynamically target a Virtual Machine Scale Set to shut down instances based on availability zone. Running this experiment can help you test failover to a Virtual Machine Scale Sets instance in a different region if there's an outage. +You can use dynamic targeting in a chaos experiment to choose a set of targets to run an experiment against, based on criteria evaluated at experiment runtime. This article shows how you can dynamically target a virtual machine scale set to shut down instances based on availability zone. Running this experiment can help you test failover to an Azure Virtual Machine Scale Sets instance in a different region if there's an outage. -These same steps can be used to set up and run an experiment for any fault that supports dynamic targeting. Currently, only Virtual Machine Scale Sets shutdown supports dynamic targeting. +You can use these same steps to set up and run an experiment for any fault that supports dynamic targeting. Currently, only virtual machine scale set shutdown supports dynamic targeting. ## Prerequisites -- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] +- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - An Azure Virtual Machine Scale Sets instance.- -## Enable Chaos Studio on your Virtual Machine Scale Sets -Chaos Studio can't inject faults against a resource until that resource has been onboarded to Chaos Studio. To onboard a resource to Chaos Studio, create a [target and capabilities](chaos-studio-targets-capabilities.md) on the resource. Virtual Machine Scale Sets only has one target type (`Microsoft-VirtualMachineScaleSet`) and one capability (`shutdown`), but other resources may have up to two target types (one for service-direct faults and one for agent-based faults) and many capabilities. +## Enable Chaos Studio on your virtual machine scale sets ++Azure Chaos Studio Preview can't inject faults against a resource until that resource is added to Chaos Studio. To add a resource to Chaos Studio, create a [target and capabilities](chaos-studio-targets-capabilities.md) on the resource. ++Virtual Machine Scale Sets has only one target type (`Microsoft-VirtualMachineScaleSet`) and one capability (`shutdown`). Other resources might have up to two target types. One target type is for service-direct faults. Another target type is for agent-based faults. Other resources also might have many other capabilities. 1. Open the [Azure portal](https://portal.azure.com). 1. Search for **Chaos Studio** in the search bar.-1. Select **Targets** and find your Virtual Machine Scale Sets resource. -1. With the Virtual Machine Scale Sets resource selected, select **Enable targets** and **Enable service-direct targets**. 
-[  ](images/tutorial-dynamic-targets-enable.png#lightbox) -1. Select **Review + Enable** and **Enable**. +1. Select **Targets** and find your virtual machine scale set resource. +1. Select the virtual machine scale set resource and select **Enable targets** > **Enable service-direct targets**. -You've now successfully onboarded your Virtual Machine Scale Set to Chaos Studio. + [ ](images/tutorial-dynamic-targets-enable.png#lightbox) +1. Select **Review + Enable** > **Enable**. ++You've now successfully added your virtual machine scale set to Chaos Studio. ## Create an experiment -With your Virtual Machine Scale Sets now onboarded, you can create your experiment. A chaos experiment defines the actions you want to take against target resources, organized into steps, which run sequentially, and branches, which run in parallel. +Now you can create your experiment. A chaos experiment defines the actions you want to take against target resources. The actions are organized and run in sequential steps. The chaos experiment also defines the actions you want to take against branches, which run in parallel. ++1. In Chaos Studio, go to **Experiments** > **Create**. ++ [](images/tutorial-dynamic-targets-experiment-browse.png#lightbox) +1. Add a name for your experiment that complies with resource naming guidelines. Select **Next: Experiment designer**. ++ [](images/tutorial-dynamic-targets-create-exp.png#lightbox) +1. In **Step 1** and **Branch 1**, select **Add action** > **Add fault**. ++ [](images/tutorial-dynamic-targets-experiment-fault.png#lightbox) +1. Select the **VMSS Shutdown (version 2.0)** fault. Select your desired duration and if you want the shutdown to be abrupt. Select **Next: Target resources**. -1. Within Chaos Studio, navigate to **Experiments** and select **Create**. -[ ](images/tutorial-dynamic-targets-experiment-browse.png#lightbox) -1. Add a name for your experiment that complies with resource naming guidelines, and select **Next: Experiment designer**. -[ ](images/tutorial-dynamic-targets-create-exp.png#lightbox) -1. Within Step 1 and Branch 1, select **Add action**, then **Add fault**. -[ ](images/tutorial-dynamic-targets-experiment-fault.png#lightbox) -1. Select the **VMSS Shutdown (version 2.0)** fault. Choose your desired duration and whether you want the shutdown to be abrupt, then select **Next: Target resources**. -[ ](images/tutorial-dynamic-targets-fault-details.png#lightbox) -1. Choose the Virtual Machine Scale Sets resource that you want to use in the experiment, then select **Next: Scope**. -[ ](images/tutorial-dynamic-targets-fault-resources.png#lightbox) -1. In the Zones dropdown, select the zone where you want Virtual Machines in the Virtual Machine Scale Sets instance to be shut down, then select **Add**. -[ ](images/tutorial-dynamic-targets-fault-zones.png#lightbox) -1. Select **Review + create** and then **Create** to save the experiment. + [](images/tutorial-dynamic-targets-fault-details.png#lightbox) +1. Select the virtual machine scale set resource that you want to use in the experiment. Select **Next: Scope**. -## Give experiment permission to your Virtual Machine Scale Sets + [](images/tutorial-dynamic-targets-fault-resources.png#lightbox) +1. In the **Zones** dropdown list, select the zone where you want virtual machines (VMs) in the Virtual Machine Scale Sets instance to be shut down. Select **Add**. -When you create a chaos experiment, Chaos Studio creates a system-assigned managed identity that executes faults against your target resources. 
This identity must be given [appropriate permissions](chaos-studio-fault-providers.md) to the target resource for the experiment to run successfully. These steps can be used for any resource and target type by modifying the role assignment in step #3 to match the [appropriate role for that resource and target type](chaos-studio-fault-providers.md). + [](images/tutorial-dynamic-targets-fault-zones.png#lightbox) +1. Select **Review + create** > **Create** to save the experiment. -1. Navigate to your Virtual Machine Scale Sets resource and select **Access control (IAM)**, then select **Add role assignment**. -[ ](images/tutorial-dynamic-targets-vmss-iam.png#lightbox) -3. In the **Role** tab, choose **Virtual Machine Contributor** and then select **Next**. -[ ](images/tutorial-dynamic-targets-role-selection.png#lightbox) -1. Choose **Select members** and search for your experiment name. Choose your experiment and then **Select**. If there are multiple experiments in the same tenant with the same name, your experiment name is truncated with random characters added. -[ ](images/tutorial-dynamic-targets-role-assignment.png#lightbox) -1. Select **Review + assign** then **Review + assign**. -[ ](images/tutorial-dynamic-targets-role-confirmation.png#lightbox) +## Give the experiment permission to your virtual machine scale sets +When you create a chaos experiment, Chaos Studio creates a system-assigned managed identity that executes faults against your target resources. This identity must be given [appropriate permissions](chaos-studio-fault-providers.md) to the target resource for the experiment to run successfully. To use these steps for any resource and target type, modify the role assignment in step 3 to match the [appropriate role for that resource and target type](chaos-studio-fault-providers.md). ++1. Go to your virtual machine scale set resource and select **Access control (IAM)** > **Add role assignment**. ++ [](images/tutorial-dynamic-targets-vmss-iam.png#lightbox) +1. On the **Role** tab, select **Virtual Machine Contributor** and select **Next**. ++ [](images/tutorial-dynamic-targets-role-selection.png#lightbox) +1. Choose **Select members** and search for your experiment name. Select your experiment and then choose **Select**. If there are multiple experiments in the same tenant with the same name, your experiment name is truncated with random characters added. ++ [](images/tutorial-dynamic-targets-role-assignment.png#lightbox) +1. Select **Review + assign** > **Review + assign**. ++ [](images/tutorial-dynamic-targets-role-confirmation.png#lightbox) ## Run your experiment -You're now ready to run your experiment! +You're now ready to run your experiment. ++1. In **Chaos Studio**, go to the **Experiments** view, select your experiment, and select **Start experiment(s)**. -1. In **Chaos Studio**, navigate to the **Experiments** view, choose your experiment, and select **Start**. -[ ](images/tutorial-dynamic-targets-start-experiment.png#lightbox) + [](images/tutorial-dynamic-targets-start-experiment.png#lightbox) 1. Select **OK** to confirm that you want to start the experiment.-1. When the **Status** changes to **Running**, select **Details** for the latest run under **History** to see details for the running experiment. If any errors occur, you can view them within **Details** by selecting a failed Action and expanding **Failed targets**. +1. When the **Status** changes to *Running*, select **Details** for the latest run under **History** to see details for the running experiment. 
If any errors occur, you can view them in **Details**. Select a failed action and expand **Failed targets**. -To see the impact, use a tool such as **Azure Monitor** or the **Virtual Machine Scale Sets** section of the portal to check if your Virtual Machine Scale Sets targets are shut down. If they're shut down, check to see that the services running on your Virtual Machine Scale Sets are still running as expected. +To see the effect, use a tool like **Azure Monitor** or the **Virtual Machine Scale Sets** section of the portal to check if your virtual machine scale set targets are shut down. If they're shut down, check to see that the services running on your virtual machine scale sets are still running as expected. In this example, the chaos experiment successfully shut down the instance in Zone 1, as expected.-[ ](images/tutorial-dynamic-targets-view-vmss.png#lightbox) ++[](images/tutorial-dynamic-targets-view-vmss.png#lightbox) ## Next steps > [!TIP]-> If your Virtual Machine Scale Set uses an autoscale policy, the policy will provision new VMs after this experiment shuts down existing VMs. To prevent this, add a parallel branch in your experiment that includes the **Disable Autoscale** fault against the Virtual Machine Scale Set's `microsoft.insights/autoscaleSettings` resource. Remember to onboard the autoscaleSettings resource as a Target and assign the role. +> If your virtual machine scale set uses an autoscale policy, the policy provisions new VMs after this experiment shuts down existing VMs. To prevent this action, add a parallel branch in your experiment that includes the **Disable Autoscale** fault against the virtual machine scale set `microsoft.insights/autoscaleSettings` resource. Remember to add the `autoscaleSettings` resource as a target and assign the role. -Now that you've run a dynamically targeted Virtual Machine Scale Sets shutdown experiment, you're ready to: +Now that you've run a dynamically targeted virtual machine scale set shutdown experiment, you're ready to: - [Create an experiment that uses agent-based faults](chaos-studio-tutorial-agent-based-portal.md) - [Manage your experiment](chaos-studio-run-experiment.md) |
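To confirm from the command line which instances the shutdown fault stopped in the portal tutorial above, you can query the scale set's instance views. A minimal sketch; `$RESOURCE_GROUP` and `$VMSS_NAME` are placeholders, and passing `"*"` returns the instance view of every VM in the scale set:

```azurecli-interactive
# List the power state reported by each instance in the scale set.
az vmss get-instance-view \
  --resource-group $RESOURCE_GROUP \
  --name $VMSS_NAME \
  --instance-id "*" \
  --query "[].statuses[?starts_with(code, 'PowerState')].displayStatus"
```

Instances in the targeted zone should report a stopped power state while the fault is active.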
chaos-studio | Chaos Studio Tutorial Service Direct Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-service-direct-cli.md | Title: Create an experiment that uses a service-direct fault using Azure Chaos Studio with the Azure CLI -description: Create an experiment that uses a service-direct fault with the Azure CLI + Title: Create an experiment using a service-direct fault with Azure CLI +description: Create an experiment that uses a service-direct fault with the Azure CLI. ms.devlang: azurecli # Create a chaos experiment that uses a service-direct fault with the Azure CLI -You can use a chaos experiment to verify that your application is resilient to failures by causing those failures in a controlled environment. In this guide, you will cause a multi-read, single-write Azure Cosmos DB failover using a chaos experiment and Azure Chaos Studio. Running this experiment can help you defend against data loss when a failover event occurs. +You can use a chaos experiment to verify that your application is resilient to failures by causing those failures in a controlled environment. In this article, you cause a multi-read, single-write Azure Cosmos DB failover by using a chaos experiment and Azure Chaos Studio Preview. Running this experiment can help you defend against data loss when a failover event occurs. -These same steps can be used to set up and run an experiment for any service-direct fault. A **service-direct** fault runs directly against an Azure resource without any need for instrumentation, unlike agent-based faults, which require installation of the chaos agent. +You can use these same steps to set up and run an experiment for any service-direct fault. A *service-direct* fault runs directly against an Azure resource without any need for instrumentation, unlike agent-based faults, which require installation of the chaos agent. ## Prerequisites -- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] -- An Azure Cosmos DB account. If you do not have an Azure Cosmos DB account, you can [follow these steps to create one](../cosmos-db/sql/create-cosmosdb-resources-portal.md).+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] +- An Azure Cosmos DB account. If you don't have an Azure Cosmos DB account, you can [create one](../cosmos-db/sql/create-cosmosdb-resources-portal.md). - At least one read and one write region setup for your Azure Cosmos DB account. -## Launch Azure Cloud Shell +## Open Azure Cloud Shell -The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account. +Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account. -To open the Cloud Shell, just select **Try it** from the upper right corner of a code block. You can also open Cloud Shell in a separate browser tab by going to [https://shell.azure.com/bash](https://shell.azure.com/bash). Select **Copy** to copy the blocks of code, paste it into the Cloud Shell, and select **Enter** to run it. +To open Cloud Shell, select **Try it** in the upper-right corner of a code block. You can also open Cloud Shell in a separate browser tab by going to [Bash](https://shell.azure.com/bash). 
Select **Copy** to copy the blocks of code, paste it into Cloud Shell, and select **Enter** to run it. -If you prefer to install and use the CLI locally, this tutorial requires Azure CLI version 2.0.30 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI]( /cli/azure/install-azure-cli). +If you want to install and use the CLI locally, this tutorial requires Azure CLI version 2.0.30 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI]( /cli/azure/install-azure-cli). > [!NOTE]-> These instructions use a Bash terminal in Azure Cloud Shell. Some commands may not work as described if running the CLI locally or in a PowerShell terminal. +> These instructions use a Bash terminal in Cloud Shell. Some commands might not work as described if you're running the CLI locally or in a PowerShell terminal. ## Enable Chaos Studio on your Azure Cosmos DB account -Chaos Studio cannot inject faults against a resource unless that resource has been onboarded to Chaos Studio first. You onboard a resource to Chaos Studio by creating a [target and capabilities](chaos-studio-targets-capabilities.md) on the resource. Azure Cosmos DB accounts only have one target type (service-direct) and one capability (failover), but other resources may have up to two target types - one for service-direct faults and one for agent-based faults - and many capabilities. +Chaos Studio can't inject faults against a resource unless that resource was added to Chaos Studio first. You add a resource to Chaos Studio by creating a [target and capabilities](chaos-studio-targets-capabilities.md) on the resource. Azure Cosmos DB accounts have only one target type (service-direct) and one capability (failover). Other resources might have up to two target types. One target type is for service-direct faults. Another target type is for agent-based faults. Other resources might have many other capabilities. -1. Create a target by replacing `$RESOURCE_ID` with the resource ID of the resource you are onboarding and `$TARGET_TYPE` with the [target type you are onboarding](chaos-studio-fault-providers.md): +1. Create a target by replacing `$RESOURCE_ID` with the resource ID of the resource you're adding. Replace `$TARGET_TYPE` with the [target type you're adding](chaos-studio-fault-providers.md): ```azurecli-interactive az rest --method put --url "https://management.azure.com/$RESOURCE_ID/providers/Microsoft.Chaos/targets/$TARGET_TYPE?api-version=2021-09-15-preview" --body "{\"properties\":{}}" ``` - For example, if onboarding a virtual machine as a service-direct target: + For example, if you're adding a virtual machine as a service-direct target: ```azurecli-interactive az rest --method put --url "https://management.azure.com/subscriptions/b65f2fec-d6b2-4edd-817e-9339d8c01dc4/resourceGroups/myRG/providers/Microsoft.Compute/virtualMachines/myVM/providers/Microsoft.Chaos/targets/Microsoft-VirtualMachine?api-version=2021-09-15-preview" --body "{\"properties\":{}}" ``` -2. Create the capabilities on the target by replacing `$RESOURCE_ID` with the resource ID of the resource you are onboarding, `$TARGET_TYPE` with the [target type you are onboarding](chaos-studio-fault-providers.md) and `$CAPABILITY` with the [name of the fault capability you are enabling](chaos-studio-fault-library.md). +1. Create the capabilities on the target by replacing `$RESOURCE_ID` with the resource ID of the resource you're adding. 
Replace `$TARGET_TYPE` with the [target type you're adding](chaos-studio-fault-providers.md). Replace `$CAPABILITY` with the [name of the fault capability you're enabling](chaos-studio-fault-library.md). ```azurecli-interactive az rest --method put --url "https://management.azure.com/$RESOURCE_ID/providers/Microsoft.Chaos/targets/$TARGET_TYPE/capabilities/$CAPABILITY?api-version=2021-09-15-preview" --body "{\"properties\":{}}" ``` - For example, if enabling the Virtual Machine shut down capability: + For example, if you're enabling the virtual machine shutdown capability: ```azurecli-interactive az rest --method put --url "https://management.azure.com/subscriptions/b65f2fec-d6b2-4edd-817e-9339d8c01dc4/resourceGroups/myRG/providers/Microsoft.Compute/virtualMachines/myVM/providers/Microsoft.Chaos/targets/Microsoft-VirtualMachine/capabilities/shutdown-1.0?api-version=2021-09-15-preview" --body "{\"properties\":{}}" ``` -You have now successfully onboarded your Azure Cosmos DB account to Chaos Studio. +You've now successfully added your Azure Cosmos DB account to Chaos Studio. ## Create an experiment-With your Azure Cosmos DB account now onboarded, you can create your experiment. A chaos experiment defines the actions you want to take against target resources, organized into steps, which run sequentially, and branches, which run in parallel. +Now you can create your experiment. A chaos experiment defines the actions you want to take against target resources. The actions are organized into steps, which run sequentially, and branches, which run in parallel. -1. Formulate your experiment JSON starting with the JSON sample below. Modify the JSON to correspond to the experiment you want to run using the [Create Experiment API](/rest/api/chaosstudio/experiments/create-or-update) and the [fault library](chaos-studio-fault-library.md) +1. Formulate your experiment JSON starting with the following JSON sample. Modify the JSON to correspond to the experiment you want to run by using the [Create Experiment API](/rest/api/chaosstudio/experiments/create-or-update) and the [fault library](chaos-studio-fault-library.md). ```json { With your Azure Cosmos DB account now onboarded, you can create your experiment. } ``` -2. Create the experiment using the Azure CLI, replacing `$SUBSCRIPTION_ID`, `$RESOURCE_GROUP`, and `$EXPERIMENT_NAME` with the properties for your experiment. Make sure you have saved and uploaded your experiment JSON and update `experiment.json` with your JSON filename. +1. Create the experiment by using the Azure CLI. Replace `$SUBSCRIPTION_ID`, `$RESOURCE_GROUP`, and `$EXPERIMENT_NAME` with the properties for your experiment. Make sure that you've saved and uploaded your experiment JSON. Update `experiment.json` with your JSON filename. ```azurecli-interactive az rest --method put --uri https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.Chaos/experiments/$EXPERIMENT_NAME?api-version=2021-09-15-preview --body @experiment.json ``` - Each experiment creates a corresponding system-assigned managed identity. Note of the `principalId` for this identity in the response for the next step. + Each experiment creates a corresponding system-assigned managed identity. Note the principal ID for this identity in the response for the next step. 
-## Give experiment permission to your Azure Cosmos DB account +## Give the experiment permission to your Azure Cosmos DB account When you create a chaos experiment, Chaos Studio creates a system-assigned managed identity that executes faults against your target resources. This identity must be given [appropriate permissions](chaos-studio-fault-providers.md) to the target resource for the experiment to run successfully. -Give the experiment access to your resource(s) using the command below, replacing `$EXPERIMENT_PRINCIPAL_ID` with the principalId from the previous step and `$RESOURCE_ID` with the resource ID of the target resource (in this case, the Azure Cosmos DB instance resource ID). Change the role to the appropriate [built-in role for that resource type](chaos-studio-fault-providers.md). Run this command for each resource targeted in your experiment. +Give the experiment access to your resources by using the following command. Replace `$EXPERIMENT_PRINCIPAL_ID` with the principal ID from the previous step. Replace `$RESOURCE_ID` with the resource ID of the target resource. In this case, it's the Azure Cosmos DB instance resource ID. Change the role to the appropriate [built-in role for that resource type](chaos-studio-fault-providers.md). Run this command for each resource targeted in your experiment. ```azurecli-interactive az role assignment create --role "Cosmos DB Operator" --assignee-object-id $EXPERIMENT_PRINCIPAL_ID --scope $RESOURCE_ID ``` ## Run your experiment-You are now ready to run your experiment. To see the impact, we recommend opening your Azure Cosmos DB account overview and going to **Replicate data globally** in a separate browser tab. Refreshing periodically during the experiment will show the region swap. +You're now ready to run your experiment. To see the effect, we recommend that you open your Azure Cosmos DB account overview and go to **Replicate data globally** in a separate browser tab. Refresh periodically during the experiment to watch the region swap. -1. Start the experiment using the Azure CLI, replacing `$SUBSCRIPTION_ID`, `$RESOURCE_GROUP`, and `$EXPERIMENT_NAME` with the properties for your experiment. +1. Start the experiment by using the Azure CLI. Replace `$SUBSCRIPTION_ID`, `$RESOURCE_GROUP`, and `$EXPERIMENT_NAME` with the properties for your experiment. ```azurecli-interactive az rest --method post --uri https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.Chaos/experiments/$EXPERIMENT_NAME/start?api-version=2021-09-15-preview ``` -2. The response includes a status URL that you can use to query experiment status as the experiment runs. +1. The response includes a status URL that you can use to query experiment status as the experiment runs. ## Next steps-Now that you have run an Azure Cosmos DB service-direct experiment, you are ready to: +Now that you've run an Azure Cosmos DB service-direct experiment, you're ready to: - [Create an experiment that uses agent-based faults](chaos-studio-tutorial-agent-based-portal.md) - [Manage your experiment](chaos-studio-run-experiment.md) |
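As a CLI alternative to refreshing the **Replicate data globally** pane during the Azure Cosmos DB experiment above, you can poll the account's current write region. A sketch, where `$COSMOSDB_ACCOUNT_NAME` is a placeholder for your account name:

```azurecli-interactive
# Show the account's current write region; rerun during the experiment to
# watch the failover move it.
az cosmosdb show \
  --name $COSMOSDB_ACCOUNT_NAME \
  --resource-group $RESOURCE_GROUP \
  --query "writeLocations[].locationName" \
  --output tsv
```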
cognitive-services | Concept Background Removal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-background-removal.md | It's important to note the limitations of background removal: ## Use the API -The background removal feature is available through the [Segment](https://centraluseuap.dev.cognitive.microsoft.com/docs/services/unified-vision-apis-public-preview-2023-02-01-preview/operations/63e6b6d9217d201194bbecbd) API (`imageanalysis:segment`). You can call this API through REST calls. See the [Background removal how-to guide](./how-to/background-removal.md) for more information. +The background removal feature is available through the [Segment](https://centraluseuap.dev.cognitive.microsoft.com/docs/services/unified-vision-apis-public-preview-2023-02-01-preview/operations/63e6b6d9217d201194bbecbd) API (`imageanalysis:segment`). You can call this API through the REST API or the Vision SDK. See the [Background removal how-to guide](./how-to/background-removal.md) for more information. ## Next steps |
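To make the REST path concrete, here's a sketch of a background removal call against the Segment API. The endpoint path, `mode` parameter, and body shape are assumptions based on the preview API version named above; confirm them against the how-to guide before relying on them:

```shell
# Request background removal for an image by URL; the response is the
# resulting PNG, saved here to output.png.
curl -X POST "https://<endpoint>/computervision/imageanalysis:segment?api-version=2023-02-01-preview&mode=backgroundRemoval" \
  -H "Ocp-Apim-Subscription-Key: <your_key>" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com/input.jpg"}' \
  --output output.png
```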
cognitive-services | Speech Container Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-overview.md | The following table lists the Speech containers available in the Microsoft Conta | Container | Features | Supported versions and locales | |--|--|--|-| [Speech to text](speech-container-stt.md) | Transcribes continuous real-time speech or batch audio recordings with intermediate results. | Latest: 3.12.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/speech-to-text/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/speech-to-text/tags/list).| -| [Custom speech to text](speech-container-cstt.md) | Using a custom model from the [Custom Speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | Latest: 3.12.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/custom-speech-to-text/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/speech-to-text/tags/list). | +| [Speech to text](speech-container-stt.md) | Transcribes continuous real-time speech or batch audio recordings with intermediate results. | Latest: 3.14.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/speech-to-text/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/speech-to-text/tags/list).| +| [Custom speech to text](speech-container-cstt.md) | Using a custom model from the [Custom Speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | Latest: 3.14.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/custom-speech-to-text/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/speech-to-text/tags/list). | | [Speech language identification](speech-container-lid.md)<sup>1, 2</sup> | Detects the language spoken in audio files. | Latest: 1.11.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/language-detection/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/language-detection/tags/list). |-| [Neural text to speech](speech-container-ntts.md) | Converts text to natural-sounding speech by using deep neural network technology, which allows for more natural synthesized speech. | Latest: 2.11.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/neural-text-to-speech/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/neural-text-to-speech/tags/list). 
| +| [Neural text to speech](speech-container-ntts.md) | Converts text to natural-sounding speech by using deep neural network technology, which allows for more natural synthesized speech. | Latest: 2.13.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/neural-text-to-speech/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/neural-text-to-speech/tags/list). | <sup>1</sup> The container is available in public preview. Containers in preview are still under development and don't meet Microsoft's stability and support requirements. <sup>2</sup> Not available as a disconnected container. |
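To use one of the listed versions, pull the image from MCR with its version-locale tag. A minimal sketch, assuming the `<version>-amd64-<locale>` tag naming used on the speech-to-text repository (check the MCR tag lists linked above for the exact tags available):

```shell
# Pull the en-US speech to text container at the latest listed version.
docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text:3.14.0-amd64-en-us
```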
cognitive-services | Use Blocklist | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/content-safety/how-to/use-blocklist.md | Copy the cURL command below to a text editor and make the following changes: 1. Replace the value of the `"text"` field with the item you'd like to add to your blocklist. The maximum length of a blockItem is 128 characters. ```shell-curl --location --request PATCH '<endpoint>/contentsafety/text/blocklists/<your_list_id>:addBlockItems?api-version=2023-04-30-preview' \ +curl --location --request POST '<endpoint>/contentsafety/text/blocklists/<your_list_id>:addBlockItems?api-version=2023-04-30-preview' \ --header 'Ocp-Apim-Subscription-Key: <enter_your_key_here>' \ --header 'Content-Type: application/json' \ --data-raw '"blockItems": [{ |
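Since the row above truncates the request body, here's a minimal sketch of what a complete call could look like, with a hypothetical list ID and item; the `description` field is an assumption based on the preview API's body shape:

```shell
# Add one item to a blocklist; note the POST verb from the fix above.
curl --location --request POST '<endpoint>/contentsafety/text/blocklists/<your_list_id>:addBlockItems?api-version=2023-04-30-preview' \
--header 'Ocp-Apim-Subscription-Key: <enter_your_key_here>' \
--header 'Content-Type: application/json' \
--data-raw '{
  "blockItems": [
    {
      "description": "Example term",
      "text": "sample blocked phrase"
    }
  ]
}'
```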
cognitive-services | Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/models.md | Currently, we offer three families of Embeddings models for different functional Each family includes models across a range of capabilities. The following list indicates the length of the numerical vector returned by the service, based on model capability: -- Ada: 1024 dimensions-- Babbage: 2048 dimensions-- Curie: 4096 dimensions-- Davinci: 12288 dimensions+| Base Model | Model(s) | Dimensions | +|--|--|--| +| Ada | models ending in -001 (Version 1) | 1024 | +| Ada | text-embedding-ada-002 (Version 2) | 1536 | +| Babbage | | 2048 | +| Curie | | 4096 | +| Davinci | | 12288 | Davinci is the most capable, but is slower and more expensive than the other models. Ada is the least capable, but is both faster and cheaper. These models can be used with Completion API requests. `gpt-35-turbo` is the onl | text-davinci-fine-tune-002 | N/A | N/A | | | | gpt-35-turbo<sup>1</sup> (ChatGPT) | East US, France Central, South Central US, West Europe | N/A | 4,096 | Sep 2021 | -<br><sup>1</sup> Currently, only version `0301` of this model is available. This version of the model will be deprecated on 8/1/2023 in favor of newer version of the gpt-35-model. See [ChatGPT model versioning](../how-to/chatgpt.md#model-versioning) for more details. +<br><sup>1</sup> Currently, only version `0301` of this model is available. ### GPT-4 Models |
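To sanity-check the dimensions in the table above, you can request an embedding and count the elements of the returned vector. A minimal sketch, assuming a hypothetical deployment of `text-embedding-ada-002` named `my-ada-002` on your resource and the `2023-05-15` API version (expect 1536 elements):

```shell
# Request an embedding and count the returned vector's dimensions.
# <resource-name>, my-ada-002, and <your-api-key> are placeholders.
curl "https://<resource-name>.openai.azure.com/openai/deployments/my-ada-002/embeddings?api-version=2023-05-15" \
  -H "Content-Type: application/json" \
  -H "api-key: <your-api-key>" \
  -d '{"input": "Sample text to embed"}' \
  | jq '.data[0].embedding | length'
```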
cognitive-services | System Message | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/system-message.md | When using the system message to demonstrate the intended behavior of the model ## Define additional behavioral guardrails -When defining additional safety and behavioral guardrails, it's helpful to first identify and prioritize [the harms](/legal/cognitive-services/openai/overview?context=/azure/cognitive-services/openai/context/context) you'd like to address. Depending on the application, the sensitivity and severity of certain harms could be more important than others. Below, we've outlined some system message templates that may help mitigate some of the common harms that have been seen with LLMs, such as fabrication of content (that is not grounded or relevant), jailbreaks, and manipulation. +When defining additional safety and behavioral guardrails, it's helpful to first identify and prioritize [the harms](/legal/cognitive-services/openai/overview?context=/azure/cognitive-services/openai/context/context) you'd like to address. Depending on the application, the sensitivity and severity of certain harms could be more important than others. ## Next steps |
communication-services | Number Lookup Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/number-lookup-sdk.md | The following list presents the set of features which are currently available in | | Get Carrier registered name | ✔️ | ❌ | ❌ | ❌ | | | Get associated Mobile Network Code, if available (two or three decimal digits used to identify the network operator within a country) | ✔️ | ❌ | ❌ | ❌ | | | Get associated Mobile Country Code, if available (three decimal digits used to identify the country of a mobile operator) | ✔️ | ❌ | ❌ | ❌ |+| | Get associated ISO Country Code | ✔️ | ❌ | ❌ | ❌ | | Phone Number | All number types in E164 format | ✔️ | ❌ | ❌ | ❌ | |
communication-services | Callkit Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/callkit-integration.md | description: Steps on how to integrate CallKit with ACS Calling SDK # Integrate with CallKit In this document, we'll go through how to integrate CallKit with your iOS application. -- > [!NOTE] - > This API is provided as a preview for developers and may change based on feedback that we receive. Do not use this API in a production environment. To use this api please use 'beta' release of Azure Communication Services Calling iOS SDK -+ ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). description: Steps on how to integrate CallKit with ACS Calling SDK options.callKitRemoteInfo = CallKitRemoteInfo() ``` - 1. Assign value for `callKitRemoteInfo.displayNameForCallKit` to customize display name for call recipients and configure `CXHandle` value. This value specified in `displayNameForCallKit` is exactly how it will show up in the last dialed call log. + 1. Assign a value for `callKitRemoteInfo.displayNameForCallKit` to customize the display name for call recipients and configure the `CXHandle` value. The value specified in `displayNameForCallKit` is exactly how it shows up in the last dialed call log. ```Swift options.callKitRemoteInfo.displayNameForCallKit = "DISPLAY_NAME" ```- 2. Assign the `cxHandle` value is what the application will receive when user calls back on that contact + 2. Assign the `cxHandle` value; it's what the application receives when the user calls back on that contact. ```Swift options.callKitRemoteInfo.cxHandle = CXHandle(type: .generic, value: "VALUE_TO_CXHANDLE") ``` description: Steps on how to integrate CallKit with ACS Calling SDK return nil } ```- if `nil` is provided for `configureAudioSession` then SDK will call the default implementation in the SDK. + If `nil` is provided for `configureAudioSession`, the SDK calls its default implementation. ### Handle incoming push notification payload - When the app receives incoming push notification payload, we need to call `handlePush` to process it. ACS Calling SDK will then raise the `IncomingCall` event. + When the app receives an incoming push notification payload, we need to call `handlePush` to process it. The ACS Calling SDK will raise the `IncomingCall` event. ```Swift public func handlePushNotification(_ pushPayload: PKPushPayload) description: Steps on how to integrate CallKit with ACS Calling SDK } ``` - We can use `reportIncomingCallFromKillState` to handle push notifications when the app is closed. - `reportIncomingCallFromKillState` API shouldn't be called if `CallAgent` instance is already available when push is received. + We can use `reportIncomingCall` to handle push notifications whether the app is closed or running. 
```Swift if let agent = self.callAgent { description: Steps on how to integrate CallKit with ACS Calling SDK agent.handlePush(notification: callNotification) { (error) in } } else { /* App is in a killed state */- CallClient.reportIncomingCallFromKillState(with: callNotification, callKitOptions: callKitOptions) { (error) in + CallClient.reportIncomingCall(with: callNotification, callKitOptions: callKitOptions) { (error) in if (error == nil) { DispatchQueue.global().async { self.callClient = CallClient() description: Steps on how to integrate CallKit with ACS Calling SDK }) } } else {- os_log("SDK couldn't handle push notification KILL mode reportToCallKit FAILED", log:self.log) + os_log("SDK couldn't handle push notification", log:self.log) } } } description: Steps on how to integrate CallKit with ACS Calling SDK ## CallKit Integration (within App) - If you wish to integrate the CallKit within the app and not use the CallKit implementation in the SDK, please take a look at the quickstart sample [here](https://github.com/Azure-Samples/communication-services-ios-quickstarts/tree/main/Add%20Video%20Calling). + If you wish to integrate CallKit within the app and not use the CallKit implementation in the SDK, refer to the quickstart sample [here](https://github.com/Azure-Samples/communication-services-ios-quickstarts/tree/main/add-video-calling). One important detail is to start the audio at the right time, as shown in the following example: ```Swift-let mutedAudioOptions = AudioOptions() -mutedAudioOptions.speakerMuted = true -mutedAudioOptions.muted = true +let outgoingAudioOptions = OutgoingAudioOptions() +outgoingAudioOptions.muted = true -let copyStartCallOptions = StartCallOptions() -copyStartCallOptions.audioOptions = mutedAudioOptions +let incomingAudioOptions = IncomingAudioOptions() +incomingAudioOptions.muted = true ++var copyStartCallOptions = StartCallOptions() +copyStartCallOptions.outgoingAudioOptions = outgoingAudioOptions +copyStartCallOptions.incomingAudioOptions = incomingAudioOptions callAgent.startCall(participants: participants, options: copyStartCallOptions, completionHandler: completionBlock) ``` -Muting speaker and microphone will ensure that physical audio devices aren't used until the CallKit calls the `didActivateAudioSession` on `CXProviderDelegate`. Otherwise the call may get dropped or no audio will be flowing. +Muting the speaker and microphone ensures that physical audio devices aren't used until CallKit calls `didActivateAudioSession` on `CXProviderDelegate`. Otherwise the call may get dropped or audio will not work. +The `didActivateAudioSession` callback is when the audio streams should be started. 
```Swift func provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession) {- activeCall.unmute { error in - if error == nil { - print("Successfully unmuted mic") - activeCall.speaker(mute: false) { error in - if error == nil { - print("Successfully unmuted speaker") - } - } - } - } + Task { + guard let activeCall = await self.callKitHelper.getActiveCall() else { + print("No active calls found when activating audio session !!") + return + } ++ try await startAudio(call: activeCall) + } +} ++func provider(_ provider: CXProvider, didDeactivate audioSession: AVAudioSession) { + Task { + guard let activeCall = await self.callKitHelper.getActiveCall() else { + print("No active calls found when deactivating audio session !!") + return + } ++ try await stopAudio(call: activeCall) + } +} ++private func stopAudio(call: Call) async throws { + try await self.callKitHelper.muteCall(callId: call.id, isMuted: true) + try await call.stopAudio(stream: call.activeOutgoingAudioStream) ++ try await call.stopAudio(stream: call.activeIncomingAudioStream) + try await call.muteIncomingAudio() +} ++private func startAudio(call: Call) async throws { + try await call.startAudio(stream: LocalOutgoingAudioStream()) + try await self.callKitHelper.muteCall(callId: call.id, isMuted: false) ++ try await call.startAudio(stream: RemoteIncomingAudioStream()) + try await call.unmuteIncomingAudio() }+ ```+It's important to also mute the outgoing audio before stopping the audio, for cases where CallKit doesn't invoke `didActivateAudioSession`. The user can then manually unmute the microphone. > [!NOTE] > In some cases CallKit doesn't call `didActivateAudioSession` even though the app has elevated audio permissions. In that case, the audio stays muted until the callback is received, and the UI has to reflect the state of the speaker and microphone. The remote participant(s) in the call will see that the user has muted audio as well. The user will have to manually unmute in those cases. |
communication-services | Number Lookup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/telephony/number-lookup.md | +> [!NOTE] +> Find the code for this quickstart on [GitHub](https://github.com/Azure/communication-preview/tree/master/samples/NumberLookup). + ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).+- To enable Number Lookup service on your Azure Communication Services subscription, please complete this [form](https://forms.microsoft.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR058xZQ9HIBLikwspEUN6t5URUVDTTdWMEg5VElQTFpaMVMyM085ODkwVS4u) for us to allow-list your subscription. - The latest version of [.NET Core client library](https://dotnet.microsoft.com/download/dotnet-core) for your operating system. - An active Communication Services resource and connection string. [Create a Communication Services resource](../create-communication-resource.md). dotnet build While still in the application directory, install the Azure Communication Services PhoneNumbers client library for .NET package by using the following command. ```console-dotnet add package Azure.Communication.PhoneNumbers --version 1.0.0 +dotnet add package Azure.Communication.PhoneNumbers --version 1.2.0-alpha.20230531.2 ``` Add a `using` directive to the top of **Program.cs** to include the `Azure.Communication` namespace. using Azure.Communication.PhoneNumbers; Update `Main` function signature to be async. ```csharp-static async Task Main(string[] args) +internal class Program {- ... + static async Task Main(string[] args) + { + ... + } } ``` It's recommended to use a `COMMUNICATION_SERVICES_CONNECTION_STRING` environment ```csharp // This code retrieves your connection string from an environment variable.-string connectionString = Environment.GetEnvironmentVariable("COMMUNICATION_SERVICES_CONNECTION_STRING"); +string? connectionString = Environment.GetEnvironmentVariable("COMMUNICATION_SERVICES_CONNECTION_STRING"); PhoneNumbersClient client = new PhoneNumbersClient(connectionString, new PhoneNumbersClientOptions(PhoneNumbersClientOptions.ServiceVersion.V2023_05_01_Preview)); ``` Run the application from your application directory with the `dotnet run` comman dotnet run ``` +## Sample code ++You can download the sample app from [GitHub](https://github.com/Azure/communication-preview/tree/master/samples/NumberLookup). + ## Troubleshooting Common questions and issues: In this quickstart you learned how to: > [!div class="checklist"] > * Look up operator information for a phone number -[!div class="nextstepaction"] -[Send an SMS](../sms/send.md) +> [!div class="nextstepaction"] +> [Number Lookup Concept](../../concepts/numbers/number-lookup-concept.md) ++> [!div class="nextstepaction"] +> [Number Lookup SDK](../../concepts/numbers/number-lookup-sdk.md) |
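The quickstart excerpt above stops at client creation. As a rough sketch of what the lookup call itself might look like — the method and type names here (`SearchOperatorInformationAsync`, `OperatorInformationResult`, `OperatorDetails`) are assumptions based on the preview package and may differ in the version you install:

```csharp
// Sketch only: the preview API surface is an assumption; verify against your package version.
Response<OperatorInformationResult> result =
    await client.SearchOperatorInformationAsync(new[] { "+14255550123" });

foreach (OperatorInformation info in result.Value.Values)
{
    Console.WriteLine($"{info.PhoneNumber}: {info.OperatorDetails?.Name}");
}
```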
confidential-computing | Multi Party Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/multi-party-data.md | Use a partner that has built a multi-party data analytics solution on top of the - [**Anjuna**](https://www.anjuna.io/use-case-solutions) provides a confidential computing platform to enable various use cases, including secure clean rooms, for organizations to share data for joint analysis, such as calculating credit risk scores or developing machine learning models, without exposing sensitive information. - [**BeeKeeperAI**](https://www.beekeeperai.com/) enables healthcare AI through a secure collaboration platform for algorithm owners and data stewards. BeeKeeperAI™ uses privacy-preserving analytics on multi-institutional sources of protected data in a confidential computing environment. The solution supports end-to-end encryption, secure computing enclaves, and Intel's latest SGX enabled processors to protect the data and the algorithm IP.-- [**Decentriq**](https://www.decentriq.com/) provides Software as a Service (SaaS) data clean rooms to enable companies to collaborate with other organizations on their most sensitive datasets and create value for their clients. The technologies help prevent anyone to see the sensitive data, including Decentriq.+- [**Decentriq**](https://www.decentriq.com/) provides SaaS data cleanrooms built on confidential computing that enable secure data collaboration without sharing data. Data science cleanrooms allow flexible multi-party analysis, and no-code cleanrooms for media and advertising enable compliant audience activation and analytics based on first-party user data. Confidential cleanrooms are described in more detail in [this article on the Microsoft blog](https://techcommunity.microsoft.com/t5/azure-confidential-computing/confidential-data-clean-rooms-the-evolution-of-sensitive-data/ba-p/3273844). - [**Fortanix**](https://www.fortanix.com/platform/confidential-ai) provides a confidential computing platform that can enable confidential AI, including multiple organizations collaborating together for multi-party analytics. - [**Habu**](https://habu.com) delivers an interoperable data clean room platform that enables businesses to unlock collaborative intelligence in a smart, secure, scalable, and simple way. Habu connects decentralized data across departments, partners, customers, and providers for better collaboration, decision-making, and results. - [**Mithril Security**](https://www.mithrilsecurity.io/) provides tooling to help SaaS vendors serve AI models inside secure enclaves, providing an on-premises level of security and control to data owners. Data owners can use their SaaS AI solutions while remaining compliant and in control of their data. |
container-apps | Azure Resource Manager Api Spec | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-resource-manager-api-spec.md | Changes made to the `template` section are [revision-scope changes](revisions.md ### <a name="container-app-examples"></a>Examples -For details on health probes, refer to [Heath probes in Azure Container Apps](./health-probes.md). +For details on health probes, refer to [Health probes in Azure Container Apps](./health-probes.md). # [ARM template](#tab/arm-template) |
container-apps | Billing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/billing.md | When a revision is scaled above the [minimum replica count](scale-app.md), all o ### Request charges -In addition to resource consumption, Azure Container Apps also charges based on the number of HTTP requests received by your container app. +In addition to resource consumption, Azure Container Apps also charges based on the number of HTTP requests received by your container app. Only requests that come from outside a Container Apps environment are billable. -The first 2 million requests in each subscription per calendar month are free. +- The first 2 million requests in each subscription per calendar month are free. +- [Health probe](./health-probes.md) requests are not billable. <a id="consumption-dedicated"></a> The billing for apps running in the Dedicated plan within the Consumption + Dedi For instance, you are not billed any charges for Dedicated unless you use a Dedicated workload profile in your environment. +## General terms For pricing details in your account's currency, see [Azure Container Apps Pricing](https://azure.microsoft.com/pricing/details/container-apps/). |
container-apps | Containerapp Up | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/containerapp-up.md | Title: Deploy Azure Container Apps with the az containerapp up command description: How to deploy a container app with the az containerapp up command -+ Last updated 11/08/2022-+ # Deploy Azure Container Apps with the az containerapp up command The `az containerapp up` (or `up`) command is the fastest way to deploy an app in Azure Container Apps from an existing image, local source code or a GitHub repo. With this single command, you can have your container app up and running in minutes. -The `az containerapp up` command is a streamlined way to create and deploy container apps that primarily use default settings. However, you'll need to use the `az containerapp create` command for apps with customizations such as: +The `az containerapp up` command is a streamlined way to create and deploy container apps that primarily use default settings. However, you'll need to run other CLI commands to configure more advanced settings: -- Dapr configuration-- Secrets-- Transport protocols-- Custom domains-- Storage mounts+- Dapr: [`az containerapp dapr enable`](/cli/azure/containerapp/dapr#az-containerapp-dapr-enable) +- Secrets: [`az containerapp secret set`](/cli/azure/containerapp/secret#az-containerapp-secret-set) +- Transport protocols: [`az containerapp ingress update`](/cli/azure/containerapp/ingress#az-containerapp-ingress-update) To customize your container app's resource or scaling settings, you can use the `up` command and then the `az containerapp update` command to change these settings. Note that the `az containerapp up` command isn't an abbreviation of the `az containerapp update` command. The `up` command can create or use existing resources including: The command can build and push a container image to an Azure Container Registry (ACR) when you provide local source code or a GitHub repo. When you're working from a GitHub repo, it creates a GitHub Actions workflow that automatically builds and pushes a new container image when you commit changes to your GitHub repo. - If you need to customize the Container Apps environment, first create the environment using the `az containerapp env create` command. If you don't provide an existing environment, the `up` command looks for one in your resource group and, if found, uses that environment. If not found, it creates an environment with a Log Analytics workspace. +If you need to customize the Container Apps environment, first create the environment using the `az containerapp env create` command. If you don't provide an existing environment, the `up` command looks for one in your resource group and, if found, uses that environment. If not found, it creates an environment with a Log Analytics workspace. To learn more about the `az containerapp up` command and its options, see [`az containerapp up`](/cli/azure/containerapp#az-containerapp-up). |
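For illustration, a typical invocation of the streamlined path from a local source folder might look like the following sketch (app name, resource group, and port are placeholders):

```shell
# Build the app from local source, create any missing resources, and deploy it.
az containerapp up \
  --name my-container-app \
  --resource-group my-resource-group \
  --location eastus \
  --source . \
  --ingress external \
  --target-port 8080
```

You can then run `az containerapp update`, or the commands listed above, to adjust settings the `up` command doesn't expose.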
container-apps | Deploy Visual Studio Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/deploy-visual-studio-code.md | Container images are stored inside container registries. You can create a contai This action opens the command palette and prompts you to define a container tag. -1. Enter a tag for the container. Accept the default, which is the project name with the *latest* suffix. +1. Enter a tag for the container. Accept the default, which is the project name with a run ID suffix. ++1. Select the Azure subscription that you want to use. 1. Select **+ Create new registry**, or if you already have a registry you'd like to use, select that item and skip to creating and deploying to the container app. Container images are stored inside container registries. You can create a contai This process may take a few moments to complete. +1. Select **Linux** as the image base operating system (OS). + Once the registry is created and the image is built successfully, you're ready to create the container app to host the published image. ## Create and deploy to the container app |
container-apps | Revisions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/revisions.md | -Azure Container Apps implements container app versioning by creating revisions. A revision is an immutable snapshot of a container app version. +Azure Container Apps implements container app versioning by creating revisions. A revision is an immutable snapshot of a container app version. - The first revision is automatically provisioned when you deploy your container app. - New revisions are automatically provisioned when you make a [*revision-scope*](#revision-scope-changes) change to your container app. Azure Container Apps implements container app versioning by creating revisions. :::image type="content" source="media/revisions/azure-container-apps-revisions.png" alt-text="Azure Container Apps: Containers"::: - ## Use cases -Container Apps revisions help you manage the release of updates to your container app by creating a new revision each time you make a *revision-scope* change to your app. You can control which revisions are active, and the external traffic that is routed to each active revision. +Container Apps revisions help you manage the release of updates to your container app by creating a new revision each time you make a *revision-scope* change to your app. You can control which revisions are active, and the external traffic that is routed to each active revision. You can use revisions to: Once the revision is verified, _running status_ is set to _running_. The revisi _Provisioning status_ values include: -- _Provisioning:_ It's being provisioned.--- _Provisioned:_ The app has been provisioned, which is the final state for provisioning status.--- _Provisioning failed:_ The app failed to provision. +- Provisioning +- Provisioned +- Provisioning failed ### Running status -After the revision is provisioned, it is running. Use _running status_ to monitor the status of a revision after a successful provision. +Revisions are fully functional after provisioning is complete. Use _running status_ to monitor the status of a revision. Running status values include: -- _Running:_ The revision is running; no issues have been identified.--- _Unhealthy:_ The revision has encountered a problem. -- Causes and urgency vary; use the revision running state details to learn more. - - Common issues include: -- - Container crashing - - Resource quota exceeded - - Image access issues, such as [_ImagePullBackOff_ errors](/troubleshoot/azure/azure-kubernetes/cannot-pull-image-from-acr-to-aks-cluster). --- _Failed:_ Critical errors cause revisions to fail. The _running state_ provides details. - - Common causes include: -- - Terminated - - Exit code 137 +| Status | Description | +|--|--| +| Running | The revision is running. There are no issues to report. | +| Unhealthy | The revision isn't operating properly. Use the revision state details for more information. Common issues include:<br>• Container crashes<br>• Resource quota exceeded<br>• Image access issues, including [_ImagePullBackOff_ errors](/troubleshoot/azure/azure-kubernetes/cannot-pull-image-from-acr-to-aks-cluster) | +| Failed | Critical errors caused revisions to fail. 
The _running state_ provides details. Common causes include:<br>• Termination<br>• Exit code `137` | Use running state details to learn more about the current status. A revision can be set to active or inactive. Inactive revisions don't have provisioning or running states. -Inactive revisions remain in a list of up to 100 inactive revisions. - +Inactive revisions remain in a list of up to 100 inactive revisions. + ## Multiple revisions+ The following diagram shows a container app with two revisions. :::image type="content" source="media/revisions/azure-container-apps-revisions-traffic-split.png" alt-text="Azure Container Apps: Traffic splitting among revisions"::: These parameters include: - Credentials for private container registries - Dapr settings - ## Revision modes The revision mode controls whether only a single revision or multiple revisions of your container app can be simultaneously active. You can set your app's revision mode from your container app's **Revision management** page in the Azure portal, using Azure CLI commands, or in the ARM template. |
container-registry | Allow Access Trusted Services | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/allow-access-trusted-services.md | Where indicated, access by the trusted service requires additional configuration | Azure Container Instances | [Deploy to Azure Container Instances from Azure Container Registry using a managed identity](../container-instances/using-azure-container-registry-mi.md) | Yes, either system-assigned or user-assigned identity | | Microsoft Defender for Cloud | Vulnerability scanning by [Microsoft Defender for container registries](scan-images-defender.md) | No | |ACR Tasks | [Access the parent registry or a different registry from an ACR Task](container-registry-tasks-cross-registry-authentication.md) | Yes |-|Machine Learning | [Deploy](../machine-learning/how-to-deploy-custom-container.md) or [train](../machine-learning/how-to-train-with-custom-image.md) a model in a Machine Learning workspace using a custom Docker container image | Yes | +|Machine Learning | [Deploy](../machine-learning/how-to-deploy-custom-container.md) or [train](../machine-learning/v1/how-to-train-with-custom-image.md) a model in a Machine Learning workspace using a custom Docker container image | Yes | |Azure Container Registry | [Import images](container-registry-import-images.md) to or from a network-restricted Azure container registry | No | > [!NOTE] |
cosmos-db | Concepts Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/concepts-limits.md | Azure Cosmos DB supports querying items using [SQL](nosql/query/getting-started. | Maximum explicitly included paths per container| 1500 ¹ | | Maximum explicitly excluded paths per container| 1500 ¹ | | Maximum properties in a composite index| 8 |+| Maximum number of paths in a composite index| 100 | ¹ You can increase any of these SQL query limits by creating an [Azure Support request](create-support-request-quota-increase.md). |
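To make the composite-index limits concrete, here's a minimal sketch of an indexing policy with one composite index over two hypothetical paths, well within the limits listed above:

```json
{
  "indexingMode": "consistent",
  "automatic": true,
  "includedPaths": [ { "path": "/*" } ],
  "compositeIndexes": [
    [
      { "path": "/lastName", "order": "ascending" },
      { "path": "/age", "order": "descending" }
    ]
  ]
}
```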
cosmos-db | Vector Search | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/vector-search.md | Use Vector Search in Azure Cosmos DB for MongoDB vCore to seamlessly integrate y ## What is Vector search? -Vector search is a method that helps you find similar items based on their data characteristics rather than exact matches on a property field. This technique is useful in applications such as searching for similar text, finding related images, making recommendations, or even detecting anomalies. It works by taking the vector representations (lists of numbers) of your data that you have created using an ML model, or an embeddings API. Examples of embeddings APIs could be [Azure OpenAI Embeddings](https://github.com/cognitive-services/openai/tutorials/embeddings.md) or [Hugging Face on Azure](https://azure.microsoft.com/solutions/hugging-face-on-azure/). It then measures the distance between the data vectors and your query vector. The data vectors that are closest to your query vector are the ones that are found to be most similar semantically. +Vector search is a method that helps you find similar items based on their data characteristics rather than exact matches on a property field. This technique is useful in applications such as searching for similar text, finding related images, making recommendations, or even detecting anomalies. It works by taking the vector representations (lists of numbers) of your data that you have created using an ML model, or an embeddings API. Examples of embeddings APIs could be [Azure OpenAI Embeddings](https://learn.microsoft.com/azure/cognitive-services/openai/how-to/embeddings) or [Hugging Face on Azure](https://azure.microsoft.com/solutions/hugging-face-on-azure/). It then measures the distance between the data vectors and your query vector. The data vectors that are closest to your query vector are the ones that are found to be most similar semantically. By integrating vector search capabilities natively, you can now unlock the full potential of your data in applications built on top of the OpenAI API. You can also create custom-built solutions that use vector embeddings. |
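As a loose illustration of what such a query can look like in the vCore API — the aggregation shape below (`$search` with a `cosmosSearch` operator taking `vector`, `path`, and `k`) is based on the preview documentation and should be treated as an assumption:

```json
{
  "$search": {
    "cosmosSearch": {
      "vector": [0.012, -0.047, 0.233],
      "path": "contentVector",
      "k": 5
    }
  }
}
```

Here `contentVector` is a hypothetical field holding embeddings produced by your ML model or embeddings API, and `k` is the number of nearest neighbors to return.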
cosmos-db | Materialized Views | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/materialized-views.md | Title: Materialized views (preview) description: Efficiently query a base container with predefined filters using Materialized views for Azure Cosmos DB for NoSQL.-+ Previously updated : 05/10/2023 Last updated : 06/01/2023 # Materialized views for Azure Cosmos DB for NoSQL (preview) Once your account and Materialized View Builder is set up, you should be able to "kind": "Hash" }, "materializedViewDefinition": {- "sourceCollectionName": "mv-src", + "sourceCollectionId": "mv-src", "definition": "SELECT s.accountId, s.emailAddress, CONCAT(s.name.first, s.name.last) FROM s" } }, Once your account and Materialized View Builder is set up, you should be able to az rest \ --method PUT \ --uri "https://management.azure.com$accountIdsqlDatabases/";\- URL6="$databaseName/containers/$materializedViewName?api-version=2022-11-15-preview" \ + "$databaseName/containers/$materializedViewName?api-version=2022-11-15-preview" \ --body @definition.json \ --headers content-type=application/json ``` Once your account and Materialized View Builder is set up, you should be able to az rest \ --method GET \ --uri "https://management.azure.com$accountIdsqlDatabases/";\- URL6="$databaseName/containers/$materializedViewName?api-version=2022-11-15-preview" \ + "$databaseName/containers/$materializedViewName?api-version=2022-11-15-preview" \ --headers content-type=application/json \ --query "{mvCreateStatus: properties.Status}" ``` |
cosmos-db | Migrate Passwordless | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/migrate-passwordless.md | description: Learn to migrate existing applications away from connection strings Previously updated : 04/05/2023 Last updated : 06/01/2023 -+ # Migrate an application to use passwordless connections with Azure Cosmos DB for NoSQL The following tutorial explains how to migrate an existing application to connec ### Migrate the app code to use passwordless connections +## [.NET](#tab/dotnet) + 1. To use `DefaultAzureCredential` in a .NET application, install the `Azure.Identity` package: ```dotnetcli The following tutorial explains how to migrate an existing application to connec 1. Identify the locations in your code that create a `CosmosClient` object to connect to Azure Cosmos DB. Update your code to match the following example. - ```csharp + ```csharp + DefaultAzureCredential credential = new(); + using CosmosClient client = new( accountEndpoint: Environment.GetEnvironmentVariable("COSMOS_ENDPOINT"),- tokenCredential: new DefaultAzureCredential() + tokenCredential: credential );- ``` + ``` ++## [Go](#tab/go) ++1. To use `DefaultAzureCredential` in a Go application, install the `azidentity` module: ++ ```bash + go get -u github.com/Azure/azure-sdk-for-go/sdk/azidentity + ``` ++1. At the top of your file, add the following code: ++ ```go + import ( + "github.com/Azure/azure-sdk-for-go/sdk/azidentity" + "github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos" + ) + ``` ++1. Identify the locations in your code that create a `Client` instance to connect to Azure Cosmos DB. Update your code to match the following example: ++ ```go + cred, err := azidentity.NewDefaultAzureCredential(nil) + if err != nil { + // handle error + } ++ endpoint := os.Getenv("COSMOS_ENDPOINT") + client, err := azcosmos.NewClient(endpoint, cred, nil) + if err != nil { + // handle error + } + ``` ++## [Java](#tab/java) ++1. To use `DefaultAzureCredential` in a Java application, install the `azure-identity` package via one of the following approaches: + 1. [Include the BOM file](/java/api/overview/azure/identity-readme?view=azure-java-stable&preserve-view=true#include-the-bom-file). + 1. [Include a direct dependency](/java/api/overview/azure/identity-readme?view=azure-java-stable&preserve-view=true#include-direct-dependency). ++1. At the top of your file, add the following code: ++ ```java + import com.azure.identity.DefaultAzureCredentialBuilder; + ``` ++1. Identify the locations in your code that create a `CosmosClient` or `CosmosAsyncClient` object to connect to Azure Cosmos DB. Update your code to match the following example: ++ ```java + DefaultAzureCredential credential = new DefaultAzureCredentialBuilder() + .build(); + String endpoint = System.getenv("COSMOS_ENDPOINT"); + + CosmosClient client = new CosmosClientBuilder() + .endpoint(endpoint) + .credential(credential) + .consistencyLevel(ConsistencyLevel.EVENTUAL) + .buildClient(); + ``` ++## [Node.js](#tab/nodejs) ++1. To use `DefaultAzureCredential` in a Node.js application, install the `@azure/identity` package: ++ ```bash + npm install --save @azure/identity + ``` ++1. At the top of your file, add the following code: ++ ```nodejs + import { DefaultAzureCredential } from "@azure/identity"; + ``` ++1. Identify the locations in your code that create a `CosmosClient` object to connect to Azure Cosmos DB. 
Update your code to match the following example: ++ ```nodejs + const credential = new DefaultAzureCredential(); + const endpoint = process.env.COSMOS_ENDPOINT; ++ const cosmosClient = new CosmosClient({ + endpoint, + aadCredentials: credential + }); + ``` ++## [Python](#tab/python) ++1. To use `DefaultAzureCredential` in a Python application, install the `azure-identity` package: + + ```bash + pip install azure-identity + ``` ++1. At the top of your file, add the following code: ++ ```python + from azure.identity import DefaultAzureCredential + ``` ++1. Identify the locations in your code that create a `CosmosClient` object to connect to Azure Cosmos DB. Update your code to match the following example: ++ ```python + credential = DefaultAzureCredential() + endpoint = os.environ["COSMOS_ENDPOINT"] ++ client = CosmosClient( + url = endpoint, + credential = credential + ) + ``` ++ ### Run the app locally az role assignment create \ --scope "<cosmosdb-resource-id>" ``` -### Update the application code --You need to configure your application code to look for the specific managed identity you created when it's deployed to Azure. In some scenarios, explicitly setting the managed identity for the app also prevents other environment identities from accidentally being detected and used automatically. --1. On the managed identity overview page, copy the client ID value to your clipboard. -1. Update the `DefaultAzureCredential` object to specify this managed identity client ID: -- ```csharp - // TODO: Update the <managed-identity-client-id> placeholder. - var credential = new DefaultAzureCredential( - new DefaultAzureCredentialOptions - { - ManagedIdentityClientId = "<managed-identity-client-id>" - }); - ``` --3. Redeploy your code to Azure after making this change in order for the configuration updates to be applied. ### Test the app In this tutorial, you learned how to migrate an application to passwordless conn You can read the following resources to explore the concepts discussed in this article in more depth: * [Authorize access to blobs using Azure Active Directory](../../storage/blobs/authorize-access-azure-active-directory.md) * To learn more about .NET, see [Get started with .NET in 10 minutes](https://dotnet.microsoft.com/learn/dotnet/hello-world-tutorial/intro). |
cosmos-db | Quickstart Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-java.md | There are several advanced scenarios that benefit from client-side throughput co - **Load balancing of throughput between different Azure Cosmos DB clients** - in some use cases, it's important to make sure all the clients get a fair (equal) share of the throughput +> [!WARNING] +> Please note that throughput control is not yet supported for gateway mode. +> Currently, for [serverless Azure Cosmos DB accounts](../serverless.md), attempting to use `targetThroughputThreshold` to define a percentage will result in failure. You can only provide an absolute value for target throughput/RU using `targetThroughput`. + ### Global throughput control Global throughput control in the Java SDK is configured by first creating a container that will define throughput control metadata. This container must have a partition key of `groupId`, and `ttl` enabled. Assuming you already have objects for client, database, and container as defined in the examples above, you can create this container as below. Here we name the container `ThroughputControl`: |
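The container-creation snippet is cut off in this excerpt. For context, a rough sketch of defining and enabling a global throughput control group with the Java SDK v4 — database, container, and group names are illustrative, and the exact builder methods should be checked against the SDK version you use:

```java
import java.time.Duration;

import com.azure.cosmos.GlobalThroughputControlConfig;
import com.azure.cosmos.ThroughputControlGroupConfig;
import com.azure.cosmos.ThroughputControlGroupConfigBuilder;

// Sketch: cap this container's consumption at 400 RU/s via a global control group.
ThroughputControlGroupConfig groupConfig =
    new ThroughputControlGroupConfigBuilder()
        .groupName("global-group")
        .targetThroughput(400) // absolute RU/s; serverless accounts can't use a percentage threshold
        .build();

// The "ThroughputControl" container (partition key /groupId, TTL enabled) stores shared metadata.
GlobalThroughputControlConfig globalConfig =
    client.createGlobalThroughputControlConfigBuilder("MyDatabase", "ThroughputControl")
        .setControlItemRenewInterval(Duration.ofSeconds(5))
        .setControlItemExpireInterval(Duration.ofSeconds(11))
        .build();

container.enableGlobalThroughputControlGroup(groupConfig, globalConfig);
```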
cosmos-db | Throughput Control Spark | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/throughput-control-spark.md | The [Spark Connector](quickstart-spark.md) allows you to communicate with Azure > This article documents the use of global throughput control groups in the Azure Cosmos DB Spark Connector, but the functionality is also available in the [Java SDK](./sdk-java-v4.md). In the SDK, you can also use both global and local Throughput Control groups to limit the RU consumption in the context of a single client connection instance. For example, you can apply this to different operations within a single microservice, or maybe to a single data loading program. Take a look at documentation on how to [use throughput control](quickstart-java.md#use-throughput-control) in the Java SDK. > [!WARNING]-> Please note that throughput control is not yet supported for gateway mode. +> Please note that throughput control is not yet supported for gateway mode. +> Currently, for [serverless Azure Cosmos DB accounts](../serverless.md), attempting to use `targetThroughputThreshold` to define a percentage will result in failure. You can only provide an absolute value for target throughput/RU using `spark.cosmos.throughputControl.targetThroughput`. ## Why is throughput control important? |
cost-management-billing | Assign Roles Azure Service Principals | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/assign-roles-azure-service-principals.md | tags: billing Previously updated : 01/18/2023 Last updated : 05/31/2023 Now you can use the SPN to automatically access EA APIs. The SPN has the Departm Now you can use the SPN to automatically access EA APIs. The SPN has the SubscriptionCreator role. +## Verify SPN role assignments ++SPN role assignments are not visible in the Azure portal. You can view enrollment account role assignments, including the subscription creator role, with the [Billing Role Assignments - List By Enrollment Account - REST API (Azure Billing)](/rest/api/billing/2019-10-01-preview/billing-role-assignments/list-by-enrollment-account) API. Use the API to verify that the role assignment was successful. + ## Troubleshoot You must identify and use the Enterprise application object ID where you granted the EA role. If you use the Object ID from some other application, API calls will fail. Verify that you're using the correct Enterprise application object ID. |
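For example, a call to that API with the Azure CLI might look like the following sketch (billing account and enrollment account names are placeholders):

```shell
# List role assignments on an enrollment account to confirm the SPN's role.
az rest --method get \
  --url "https://management.azure.com/providers/Microsoft.Billing/billingAccounts/<billingAccountName>/enrollmentAccounts/<enrollmentAccountName>/billingRoleAssignments?api-version=2019-10-01-preview"
```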
cost-management-billing | Reservation Utilization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/reservation-utilization.md | There are two options for Power BI users: - Cost Management connector for Power BI Desktop - Reservation purchase date and utilization data are available in the [Cost Management connector for Power BI Desktop](/power-bi/desktop-connect-azure-cost-management). Create the reports you want by using the connector. - Cost Management Power BI App - Use the [Cost Management Power BI App](https://appsource.microsoft.com/product/power-bi/costmanagement.azurecostmanagementapp) for pre-created reports that you can further customize. +## Set alerts on utilization ++With reservation utilization alerts, you can promptly take remedial actions to ensure optimal utilization of your reservation purchases. To learn more, see [Reservation utilization alerts](../costs/reservation-utilization-alerts.md). ++ ## Next steps - [Manage Azure Reservations](manage-reserved-vm-instance.md). |
data-factory | Create Self Hosted Integration Runtime | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-self-hosted-integration-runtime.md | You also need to make sure that Microsoft Azure is in your company's allowlist. - Public: https://www.microsoft.com/download/details.aspx?id=56519 - US Gov: https://www.microsoft.com/download/details.aspx?id=57063 - Germany: https://www.microsoft.com/download/details.aspx?id=57064 - - China: https://www.microsoft.com/download/details.aspx?id=57062 + - China: https://www.microsoft.com/download/details.aspx?id=57062 ++### Configure proxy server settings when using a private endpoint ++If your company's network architecture uses private endpoints and, for security reasons, your company's policy doesn't allow a direct internet connection from the VM hosting the self-hosted integration runtime to the Azure Data Factory service URL, you need to add the ADF service URL to the proxy bypass list for full connectivity. The following procedure provides instructions for updating the diahost.exe.config file. You should also repeat these steps for the diawp.exe.config file. ++1. In File Explorer, make a safe copy of _C:\Program Files\Microsoft Integration Runtime\4.0\Shared\diahost.exe.config_ as a backup of the original file. +1. Open Notepad running as administrator. +1. In Notepad, open _C:\Program Files\Microsoft Integration Runtime\4.0\Shared\diahost.exe.config_. +1. Find the default **system.net** tag as shown here: ++ ```xml + <system.net> + <defaultProxy useDefaultCredentials="true" /> + </system.net> + ``` ++ You can then add bypasslist details as shown in the following example: ++ ```xml + <system.net> + <defaultProxy> + <bypasslist> + <add address = "[adfresourcename].[adfresourcelocation].datafactory.azure.net" /> + </bypasslist> + <proxy + usesystemdefault="True" + proxyaddress="http://proxy.domain.org:8888/" + bypassonlocal="True" + /> + </defaultProxy> + </system.net> + ``` ### Possible symptoms for issues related to the firewall and proxy server |
data-factory | Whats New Archive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/whats-new-archive.md | This archive page retains updates from older months. Check out our [What's New video archive](https://www.youtube.com/playlist?list=PLt4mCx89QIGS1rQlNt2-7iuHHAKSomVLv) for all of our monthly update +## October 2022 ++### Video summary ++> [!VIDEO https://www.youtube.com/embed?v=Ou90M59VQCA&list=PLt4mCx89QIGS1rQlNt2-7iuHHAKSomVLv&index=7] + +### Data flow ++- Export up to 1000 rows from data flow preview [Learn more](concepts-data-flow-debug-mode.md?tabs=data-factory#data-preview) +- SQL CDC in Mapping Data Flows now available (Public Preview) [Learn more](connector-sql-server.md?tabs=data-factory#native-change-data-capture) +- Unlock advanced analytics with Microsoft 365 Mapping Data Flow Connector [Learn more](https://devblogs.microsoft.com/microsoft365dev/scale-access-to-microsoft-365-data-with-microsoft-graph-data-connect/) +- SAP Change Data Capture (CDC) is now generally available [Learn more](connector-sap-change-data-capture.md#transform-data-with-the-sap-cdc-connector) ++### Developer productivity ++- Now accepting community contributions to Template Gallery [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/introducing-azure-data-factory-community-templates/ba-p/3650989) +- New design in Azure portal – easily discover how to launch ADF Studio [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/improved-ui-for-launching-azure-data-factory-studio/ba-p/3659610) +- Learning Center now available in the Azure Data Factory studio [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/introducing-the-learning-center-to-azure-data-factory-studio/ba-p/3660888) +- One-click to try Azure Data Factory [Learn more](quickstart-get-started.md) ++### Orchestration ++- Granular billing view available for ADF – see detailed billing information by pipeline (Public Preview) [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/granular-billing-for-azure-data-factory/ba-p/3654600) +- Script activity execution timeout now configurable [Learn more](transform-data-using-script.md) ++### Region expansion ++Continued region expansion – Qatar Central now supported [Learn more](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=data-factory) ++### Continuous integration and continuous deployment ++Exclude pipeline triggers that did not change in deployment now generally available [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/ci-cd-improvements-related-to-pipeline-triggers-deployment/ba-p/3605064) ++## September 2022 ++### Video summary ++> [!VIDEO https://www.youtube.com/embed?v=Bh_VA8n-SL8&list=PLt4mCx89QIGS1rQlNt2-7iuHHAKSomVLv&index=6] ++### Data flow ++- Amazon S3 source connector added [Learn more](connector-amazon-simple-storage-service.md?tabs=data-factory) +- Google Sheets REST-based connector added as Source (Preview) [Learn more](connector-google-sheets.md?tabs=data-factory) +- Maximum column optimization in dataflow [Learn more](format-delimited-text.md#mapping-data-flow-properties) +- SAP Change Data Capture capabilities in Mapping Data Flow (Preview) - Extract and transform data changes from SAP systems for a more efficient data refresh [Learn more](connector-sap-change-data-capture.md#transform-data-with-the-sap-cdc-connector) +- Writing data to a lookup field via alternative keys supported in Dynamics 
365/CRM connectors for mapping data flows [Learn more](connector-dynamics-crm-office-365.md?tabs=data-factory#writing-data-to-a-lookup-field-via-alternative-keys) ++### Data movement ++Support to convert Oracle NUMBER type to corresponding integer in source [Learn more](connector-oracle.md?tabs=data-factory#oracle-as-source) ++### Monitoring ++- Additional monitoring improvements in Azure Data Factory [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/further-adf-monitoring-improvements/ba-p/3607669) + - Monitoring loading improvements - pipeline re-run groupings data fetched only when expanded + - Pagination added to pipeline activity runs view to show all activity records in pipeline run + - Monitoring consumption improvement – loading icon added to know when consumption report is fully calculated + - Additional sorting columns in monitoring – sorting added for Pipeline name, Run End, and Status + - Time-zone settings now saved in monitoring +- Gantt chart view now supported in IR monitoring [Learn more](monitor-integration-runtime.md) ++### Orchestration ++DELETE method in the Web activity now supports sending a body with HTTP request [Learn more](control-flow-web-activity.md#type-properties) ++### User interface ++- Native UI support of parameterization added for 6 additional linked services – SAP ODP, ODBC, Microsoft Access, Informix, Snowflake, and DB2 [Learn more](parameterize-linked-services.md?tabs=data-factory#supported-linked-service-types) +- Pipeline designer enhancements added in Studio Preview experience – users can view workflow inside pipeline objects like For Each, If Then, etc. [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/azure-data-factory-updated-pipeline-designer/ba-p/3618755) ++## August 2022 ++### Video summary ++> [!VIDEO https://www.youtube.com/embed?v=KCJ2F6Y_nfo&list=PLt4mCx89QIGS1rQlNt2-7iuHHAKSomVLv&index=5] ++### Data flow +- Appfigures connector added as Source (Preview) [Learn more](connector-appfigures.md) +- Cast transformation added – visually convert data types [Learn more](data-flow-cast.md) +- New UI for inline datasets - categories added to easily find data sources [Learn more](data-flow-source.md#inline-datasets) ++### Data movement +Service principal authentication type added for Azure Blob storage [Learn more](connector-azure-blob-storage.md?tabs=data-factory#service-principal-authentication) ++### Developer productivity +- Default activity time-out changed from 7 days to 12 hours [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/azure-data-factory-changing-default-pipeline-activity-timeout/ba-p/3598729) +- New data factory creation experience - one click to have your factory ready within seconds [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/new-experience-for-creating-data-factory-within-seconds/ba-p/3561249) +- Expression builder UI update – categorical tabs added for easier use [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/coming-soon-to-adf-more-pipeline-expression-builder-ease-of-use/ba-p/3567196) ++### Continuous integration and continuous delivery (CI/CD) +When CI/CD integrating ARM template, instead of turning off all triggers, it can exclude triggers that didn't change in deployment [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/ci-cd-improvements-related-to-pipeline-triggers-deployment/ba-p/3605064) ++## July 2022 ++### Video summary ++> [!VIDEO 
https://www.youtube.com/embed?v=EOVVt4qYvZI&list=PLt4mCx89QIGS1rQlNt2-7iuHHAKSomVLv&index=4] ++### Data flow ++- Asana connector added as source [Learn more](connector-asana.md) +- Three new data transformation functions now supported [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/3-new-data-transformation-functions-in-adf/ba-p/3582738) + - [collectUnique()](data-flow-expressions-usage.md#collectUnique) - Create a new collection of unique values in an array. + - [substringIndex()](data-flow-expressions-usage.md#substringIndex) - Extract the substring before n occurrences of a delimiter. + - [topN()](data-flow-expressions-usage.md#topN) - Return the top n results after sorting your data. +- Refetch from source available in Refresh for data source change scenarios [Learn more](concepts-data-flow-debug-mode.md#data-preview) +- User defined functions (GA) - Create reusable and customized expressions to avoid building complex logic over and over [Learn more](concepts-data-flow-udf.md) [Video](https://www.youtube.com/watch?v=ZFTVoe8eeOc&t=170s) +- Easier configuration on data flow runtime - choose compute size among Small, Medium and Large to pre-configure all integration runtime settings [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/adf-makes-it-easy-to-select-azure-ir-size-for-data-flows/ba-p/3578033) ++### Continuous integration and continuous delivery (CI/CD) ++Include Global parameters supported in ARM template. [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/ci-cd-improvement-using-global-parameters-in-azure-data-factory/ba-p/3557265#M665) +### Developer productivity ++Be a part of Azure Data Factory studio preview features - Experience the latest Azure Data Factory capabilities and be the first to share your feedback [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/introducing-the-azure-data-factory-studio-preview-experience/ba-p/3563880) + ## June 2022 ### Video summary |
data-factory | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/whats-new.md | This page is updated monthly, so revisit it regularly. For older months' update Check out our [What's New video archive](https://www.youtube.com/playlist?list=PLt4mCx89QIGS1rQlNt2-7iuHHAKSomVLv) for all of our monthly update videos. +## April 2023 ++### Data flow ++Easily unroll multiple arrays in ADF data flows. ADF updated the **Flatten** transformation, which now makes it easy to unroll multiple arrays in a single **Flatten** transformation step. [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/unroll-multiple-arrays-in-a-single-flatten-step-in-adf/ba-p/3802457) ++### Continuous integration and continuous deployment ++You can customize the commit message in Git mode now. Type in a detailed description of the changes you make, and we'll save it to the Git repository. ++### Connectors ++The Azure Blob Storage connector now supports anonymous authentication. [Learn more](connector-azure-blob-storage.md#anonymous-authentication) ++## March 2023 ++### Connectors ++Azure Data Lake Storage Gen2 connector now supports shared access signature authentication. [Learn more](connector-azure-data-lake-storage.md#shared-access-signature-authentication) + ## February 2023 ### Data movement Continued region expansion - Azure Data Factory is now available in China North In auto publish config, a disable publish button is available to avoid overwriting the last automated publish deployment [Learn more](source-control.md?tabs=data-factory#editing-repo-settings) --## October 2022 --### Video summary --> [!VIDEO https://www.youtube.com/embed?v=Ou90M59VQCA&list=PLt4mCx89QIGS1rQlNt2-7iuHHAKSomVLv&index=7] - -### Data flow --- Export up to 1000 rows from data flow preview [Learn more](concepts-data-flow-debug-mode.md?tabs=data-factory#data-preview)-- SQL CDC in Mapping Data Flows now available (Public Preview) [Learn more](connector-sql-server.md?tabs=data-factory#native-change-data-capture)-- Unlock advanced analytics with Microsoft 365 Mapping Data Flow Connector [Learn more](https://devblogs.microsoft.com/microsoft365dev/scale-access-to-microsoft-365-data-with-microsoft-graph-data-connect/)-- SAP Change Data Capture (CDC) in now generally available [Learn more](connector-sap-change-data-capture.md#transform-data-with-the-sap-cdc-connector)--### Developer productivity --- Now accepting community contributions to Template Gallery [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/introducing-azure-data-factory-community-templates/ba-p/3650989)-- New design in Azure portal – easily discover how to launch ADF Studio [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/improved-ui-for-launching-azure-data-factory-studio/ba-p/3659610)-- Learning Center now available in the Azure Data Factory studio [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/introducing-the-learning-center-to-azure-data-factory-studio/ba-p/3660888)-- One-click to try Azure Data Factory [Learn more](quickstart-get-started.md)--### Orchestration --- Granular billing view available for ADF – see detailed billing information by pipeline (Public Preview) [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/granular-billing-for-azure-data-factory/ba-p/3654600)-- Script activity execution timeout now configurable [Learn more](transform-data-using-script.md)--### Region expansion --Continued region 
expansion – Qatar Central now supported [Learn more](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=data-factory) --### Continuous integration and continuous deployment --Exclude pipeline triggers that did not change in deployment now generally available [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/ci-cd-improvements-related-to-pipeline-triggers-deployment/ba-p/3605064) --## September 2022 --### Video summary --> [!VIDEO https://www.youtube.com/embed?v=Bh_VA8n-SL8&list=PLt4mCx89QIGS1rQlNt2-7iuHHAKSomVLv&index=6] --### Data flow --- Amazon S3 source connector added [Learn more](connector-amazon-simple-storage-service.md?tabs=data-factory)-- Google Sheets REST-based connector added as Source (Preview) [Learn more](connector-google-sheets.md?tabs=data-factory)-- Maximum column optimization in dataflow [Learn more](format-delimited-text.md#mapping-data-flow-properties)-- SAP Change Data Capture capabilities in Mapping Data Flow (Preview) - Extract and transform data changes from SAP systems for a more efficient data refresh [Learn more](connector-sap-change-data-capture.md#transform-data-with-the-sap-cdc-connector)-- Writing data to a lookup field via alternative keys supported in Dynamics 365/CRM connectors for mapping data flows [Learn more](connector-dynamics-crm-office-365.md?tabs=data-factory#writing-data-to-a-lookup-field-via-alternative-keys)--### Data movement --Support to convert Oracle NUMBER type to corresponding integer in source [Learn more](connector-oracle.md?tabs=data-factory#oracle-as-source) --### Monitoring --- Additional monitoring improvements in Azure Data Factory [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/further-adf-monitoring-improvements/ba-p/3607669)- - Monitoring loading improvements - pipeline re-run groupings data fetched only when expanded - - Pagination added to pipeline activity runs view to show all activity records in pipeline run - - Monitoring consumption improvement – loading icon added to know when consumption report is fully calculated - - Additional sorting columns in monitoring – sorting added for Pipeline name, Run End, and Status - - Time-zone settings now saved in monitoring -- Gantt chart view now supported in IR monitoring [Learn more](monitor-integration-runtime.md)--### Orchestration --DELETE method in the Web activity now supports sending a body with HTTP request [Learn more](control-flow-web-activity.md#type-properties) --### User interface --- Native UI support of parameterization added for 6 additional linked services – SAP ODP, ODBC, Microsoft Access, Informix, Snowflake, and DB2 [Learn more](parameterize-linked-services.md?tabs=data-factory#supported-linked-service-types)-- Pipeline designer enhancements added in Studio Preview experience – users can view workflow inside pipeline objects like For Each, If Then, etc. 
[Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/azure-data-factory-updated-pipeline-designer/ba-p/3618755)--## August 2022 --### Video summary --> [!VIDEO https://www.youtube.com/embed?v=KCJ2F6Y_nfo&list=PLt4mCx89QIGS1rQlNt2-7iuHHAKSomVLv&index=5] --### Data flow -- Appfigures connector added as Source (Preview) [Learn more](connector-appfigures.md)-- Cast transformation added – visually convert data types [Learn more](data-flow-cast.md)-- New UI for inline datasets - categories added to easily find data sources [Learn more](data-flow-source.md#inline-datasets)--### Data movement -Service principal authentication type added for Azure Blob storage [Learn more](connector-azure-blob-storage.md?tabs=data-factory#service-principal-authentication) --### Developer productivity -- Default activity time-out changed from 7 days to 12 hours [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/azure-data-factory-changing-default-pipeline-activity-timeout/ba-p/3598729)-- New data factory creation experience - one click to have your factory ready within seconds [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/new-experience-for-creating-data-factory-within-seconds/ba-p/3561249)-- Expression builder UI update – categorical tabs added for easier use [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/coming-soon-to-adf-more-pipeline-expression-builder-ease-of-use/ba-p/3567196)--### Continuous integration and continuous delivery (CI/CD) -When CI/CD integrating ARM template, instead of turning off all triggers, it can exclude triggers that didn't change in deployment [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/ci-cd-improvements-related-to-pipeline-triggers-deployment/ba-p/3605064) --## July 2022 --### Video summary --> [!VIDEO https://www.youtube.com/embed?v=EOVVt4qYvZI&list=PLt4mCx89QIGS1rQlNt2-7iuHHAKSomVLv&index=4] --### Data flow --- Asana connector added as source [Learn more](connector-asana.md)-- Three new data transformation functions now supported [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/3-new-data-transformation-functions-in-adf/ba-p/3582738)- - [collectUnique()](data-flow-expressions-usage.md#collectUnique) - Create a new collection of unique values in an array. - - [substringIndex()](data-flow-expressions-usage.md#substringIndex) - Extract the substring before n occurrences of a delimiter. - - [topN()](data-flow-expressions-usage.md#topN) - Return the top n results after sorting your data. -- Refetch from source available in Refresh for data source change scenarios [Learn more](concepts-data-flow-debug-mode.md#data-preview)-- User defined functions (GA) - Create reusable and customized expressions to avoid building complex logic over and over [Learn more](concepts-data-flow-udf.md) [Video](https://www.youtube.com/watch?v=ZFTVoe8eeOc&t=170s)-- Easier configuration on data flow runtime - choose compute size among Small, Medium and Large to pre-configure all integration runtime settings [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/adf-makes-it-easy-to-select-azure-ir-size-for-data-flows/ba-p/3578033)--### Continuous integration and continuous delivery (CI/CD) --Include Global parameters supported in ARM template. 
[Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/ci-cd-improvement-using-global-parameters-in-azure-data-factory/ba-p/3557265#M665) -### Developer productivity --Be a part of Azure Data Factory studio preview features - Experience the latest Azure Data Factory capabilities and be the first to share your feedback [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/introducing-the-azure-data-factory-studio-preview-experience/ba-p/3563880) - ## More information - [What's new archive](whats-new-archive.md) |
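To make the anonymous authentication item above concrete, here is a hedged Java sketch of what anonymous (public) read access to Blob Storage looks like with the Azure Storage SDK; it illustrates the access pattern the connector now supports, not the ADF connector configuration itself. The endpoint, container, and blob names are placeholders, and the container must allow public read access.

```java
import com.azure.core.util.BinaryData;
import com.azure.storage.blob.BlobClient;
import com.azure.storage.blob.BlobServiceClientBuilder;

// Reads a blob from a container configured for public (anonymous) read access.
// Building the client without a credential results in anonymous requests.
public class AnonymousBlobRead {
    public static void main(String[] args) {
        BlobClient blob = new BlobServiceClientBuilder()
                .endpoint("https://contosopublic.blob.core.windows.net") // placeholder account
                .buildClient()
                .getBlobContainerClient("public-datasets")               // placeholder container
                .getBlobClient("sample.csv");                            // placeholder blob

        BinaryData content = blob.downloadContent();
        System.out.println(content.toString());
    }
}
```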
defender-for-cloud | Alerts Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-overview.md | description: Learn how Microsoft Defender for Cloud generates security alerts an Previously updated : 11/29/2022 Last updated : 05/29/2023 # Security alerts and incidents-Security alerts are the notifications generated by Defender for Cloud's workload protection plans when threats are identified in your Azure, hybrid, or multi-cloud environments. +Security alerts are the notifications generated by Defender for Cloud's workload protection plans when threats are identified in your Azure, hybrid, or multicloud environments. - Security alerts are triggered by advanced detections available when you enable [Defender plans](defender-for-cloud-introduction.md#protect-cloud-workloads) for specific resource types. - Each alert provides details of affected resources, issues, and remediation steps. Alerts have a severity level assigned to help prioritize how to attend to each a | Severity | Recommended response | |-|| | **High** | There is a high probability that your resource is compromised. You should look into it right away. Defender for Cloud has high confidence in both the malicious intent and in the findings used to issue the alert. For example, an alert that detects the execution of a known malicious tool such as Mimikatz, a common tool used for credential theft. |-| **Medium** | This is probably a suspicious activity that might indicate that a resource is compromised. Defender for Cloud's confidence in the analytic or finding is medium and the confidence of the malicious intent is medium to high. These would usually be machine learning or anomaly-based detections, for example a sign-in attempt from an unusual location. | +| **Medium** | This is probably a suspicious activity that might indicate that a resource is compromised. Defender for Cloud's confidence in the analytic or finding is medium and the confidence of the malicious intent is medium to high. These would usually be machine learning or anomaly based detections, for example a sign-in attempt from an unusual location. | | **Low** | This might be a benign positive or a blocked attack. Defender for Cloud isn't confident enough that the intent is malicious and the activity might be innocent. For example, log clearing is an action that might happen when an attacker tries to hide their tracks, but in many cases is a routine operation performed by admins. Defender for Cloud doesn't usually tell you when attacks were blocked, unless it's an interesting case that we suggest you look into. | | **Informational** | An incident is typically made up of a number of alerts, some of which might appear on their own to be only informational, but in the context of the other alerts might be worthy of a closer look. | Defender for Cloud correlates alerts and contextual signals into incidents. - By using the information gathered for each step of an attack, Defender for Cloud can also rule out activity that appears to be steps of an attack, but actually isn't. > [!TIP]-> In the [alerts reference](alerts-reference.md#alerts-fusion), review the list of security incident alerts that can be produced by incident correlation. +> In the [incidents reference](incidents-reference.md), review the list of security incidents that can be produced by incident correlation. <a name="detect-threats"> </a> |
defender-for-cloud | Alerts Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md | Microsoft Defender for Servers Plan 2 provides unique detections and alerts, in |**Antimalware unusual file exclusion in your virtual machine**<br>(VM_UnusualAmFileExclusion) | Unusual file exclusion from antimalware extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers might exclude files from the antimalware scan on your virtual machine to prevent detection while running arbitrary code or infecting the machine with malware. | Defense Evasion | Medium | |**Behavior similar to ransomware detected [seen multiple times]**|Analysis of host data on %{Compromised Host} detected the execution of files that resemble known ransomware, which can prevent users from accessing their system or personal files, and demands ransom payment in order to regain access. This behavior was seen [x] times today on the following machines: [Machine names]|-|High| |**Communication with suspicious domain identified by threat intelligence**<br>(AzureDNS_ThreatIntelSuspectDomain) | Communication with suspicious domain was detected by analyzing DNS transactions from your resource and comparing against known malicious domains identified by threat intelligence feeds. Communication to malicious domains is frequently performed by attackers and could imply that your resource is compromised. | Initial Access, Persistence, Execution, Command And Control, Exploitation | Medium |-|**Container with a miner image detected**<br>(VM_MinerInContainerImage) | Machine logs indicate execution of a Docker container that run an image associated with digital currency mining. | Execution | High | +|**Container with a miner image detected**<br>(VM_MinerInContainerImage) | Machine logs indicate execution of a Docker container that runs an image associated with digital currency mining. | Execution | High | |**Custom script extension with suspicious command in your virtual machine**<br>(VM_CustomScriptExtensionSuspiciousCmd) | Custom script extension with suspicious command was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers may use custom script extension to execute malicious code on your virtual machine via the Azure Resource Manager. | Execution | Medium | |**Custom script extension with suspicious entry-point in your virtual machine**<br>(VM_CustomScriptExtensionSuspiciousEntryPoint) | Custom script extension with a suspicious entry-point was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription. The entry-point refers to a suspicious GitHub repository.<br>Attackers may use custom script extensions to execute malicious code on your virtual machines via the Azure Resource Manager. | Execution | Medium | |**Custom script extension with suspicious payload in your virtual machine**<br>(VM_CustomScriptExtensionSuspiciousPayload) | Custom script extension with a payload from a suspicious GitHub repository was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers may use custom script extensions to execute malicious code on your virtual machines via the Azure Resource Manager. 
| Execution | Medium | Microsoft Defender for Containers provides security alerts on the cluster level | **Login from a domain not seen in 60 days**<br>(SQL.DB_DomainAnomaly<br>SQL.VM_DomainAnomaly<br>SQL.DW_DomainAnomaly<br>SQL.MI_DomainAnomaly<br>Synapse.SQLPool_DomainAnomaly) | A user has logged in to your resource from a domain no other users have connected from in the last 60 days. If this resource is new or this is expected behavior caused by recent changes in the users accessing the resource, Defender for Cloud will identify significant changes to the access patterns and attempt to prevent future false positives. | Exploitation | Medium | | **Login from a suspicious IP**<br>(SQL.DB_SuspiciousIpAnomaly<br>SQL.VM_SuspiciousIpAnomaly<br>SQL.DW_SuspiciousIpAnomaly<br>SQL.MI_SuspiciousIpAnomaly<br>Synapse.SQLPool_SuspiciousIpAnomaly) | Your resource has been accessed successfully from an IP address that Microsoft Threat Intelligence has associated with suspicious activity. | PreAttack | Medium | | **Potential SQL injection**<br>(SQL.DB_PotentialSqlInjection<br>SQL.VM_PotentialSqlInjection<br>SQL.MI_PotentialSqlInjection<br>SQL.DW_PotentialSqlInjection<br>Synapse.SQLPool_PotentialSqlInjection) | An active exploit has occurred against an identified application vulnerable to SQL injection. This means an attacker is trying to inject malicious SQL statements by using the vulnerable application code or stored procedures. | PreAttack | High |-| **Suspected brute force attack using a valid user**<br>(SQL.DB_BruteForce<br>SQL.VM_BruteForce<br>SQL.DW_BruteForce<br>SQL.MI_BruteForce<br>Synapse.SQLPool_BruteForce) | A potential brute force attack has been detected on your resource. The attacker is using the valid user (username), which has permissions to login. | PreAttack | High | +| **Suspected brute force attack using a valid user**<br>(SQL.DB_BruteForce<br>SQL.VM_BruteForce<br>SQL.DW_BruteForce<br>SQL.MI_BruteForce<br>Synapse.SQLPool_BruteForce) | A potential brute force attack has been detected on your resource. The attacker is using the valid user (username), which has permissions to log in. | PreAttack | High | | **Suspected brute force attack**<br>(SQL.DB_BruteForce<br>SQL.VM_BruteForce<br>SQL.DW_BruteForce<br>SQL.MI_BruteForce<br>Synapse.SQLPool_BruteForce) | A potential brute force attack has been detected on your resource. | PreAttack | High | | **Suspected successful brute force attack**<br>(SQL.DB_BruteForce<br>SQL.VM_BruteForce<br>SQL.DW_BruteForce<br>SQL.MI_BruteForce<br>Synapse.SQLPool_BruteForce) | A successful login occurred after an apparent brute force attack on your resource. | PreAttack | High | | **SQL Server potentially spawned a Windows command shell and accessed an abnormal external source**<br>(SQL.DB_ShellExternalSourceAnomaly<br>SQL.VM_ShellExternalSourceAnomaly<br>SQL.DW_ShellExternalSourceAnomaly<br>SQL.MI_ShellExternalSourceAnomaly<br>Synapse.SQLPool_ShellExternalSourceAnomaly) | A suspicious SQL statement potentially spawned a Windows command shell with an external source that hasn't been seen before. Executing a shell that accesses an external source is a method used by attackers to download a malicious payload and then execute it on the machine and compromise it. This enables an attacker to perform malicious tasks under remote direction. Alternatively, accessing an external source can be used to exfiltrate data to an external destination. 
| Execution | High | Microsoft Defender for Containers provides security alerts on the cluster level | Alert (alert type) | Description | MITRE tactics<br>([Learn more](#intentions)) | Severity | |-||-|-|-| **Suspected brute force attack using a valid user**<br>(SQL.PostgreSQL_BruteForce<br>SQL.MariaDB_BruteForce<br>SQL.MySQL_BruteForce) | A potential brute force attack has been detected on your resource. The attacker is using the valid user (username), which has permissions to login. | PreAttack | High | +| **Suspected brute force attack using a valid user**<br>(SQL.PostgreSQL_BruteForce<br>SQL.MariaDB_BruteForce<br>SQL.MySQL_BruteForce) | A potential brute force attack has been detected on your resource. The attacker is using the valid user (username), which has permissions to log in. | PreAttack | High | | **Suspected successful brute force attack**<br>(SQL.PostgreSQL_BruteForce<br>SQL.MySQL_BruteForce<br>SQL.MariaDB_BruteForce) | A successful login occurred after an apparent brute force attack on your resource. | PreAttack | High | | **Suspected brute force attack**<br>(SQL.PostgreSQL_BruteForce<br>SQL.MySQL_BruteForce<br>SQL.MariaDB_BruteForce) | A potential brute force attack has been detected on your resource. | PreAttack | High | | **Attempted logon by a potentially harmful application**<br>(SQL.PostgreSQL_HarmfulApplication<br>SQL.MariaDB_HarmfulApplication<br>SQL.MySQL_HarmfulApplication) | A potentially harmful application attempted to access your resource. | PreAttack | High | Microsoft Defender for Containers provides security alerts on the cluster level | **DDoS Attack detected for Public IP**<br>(NETWORK_DDOS_DETECTED) | DDoS Attack detected for Public IP (IP address) and being mitigated. | Probing | High | | **DDoS Attack mitigated for Public IP**<br>(NETWORK_DDOS_MITIGATED) | DDoS Attack mitigated for Public IP (IP address). | Probing | Low | --## <a name="alerts-fusion"></a>Security incident --[Further details and notes](alerts-overview.md#what-are-security-incidents) --| Alert | Description | MITRE tactics<br>([Learn more](#intentions)) | Severity | -||-|:--:|-| -| **Security incident with shared process detected** | The incident which started on {Start Time (UTC)} and recently detected on {Detected Time (UTC)} indicates that an attacker has {Action taken} your resource {Host} | - | High | -| **Security incident detected on multiple resources** | The incident which started on {Start Time (UTC)} and recently detected on {Detected Time (UTC)} indicates that similar attack methods were performed on your cloud resources {Host} | - | Medium | -| **Security incident detected from same source** | The incident which started on {Start Time (UTC)} and recently detected on {Detected Time (UTC)} indicates that an attacker has {Action taken} your resource {Host} | - | High | -| **Security incident detected on multiple machines** | The incident which started on {Start Time (UTC)} and recently detected on {Detected Time (UTC)} indicates that an attacker has {Action taken} your resources {Host} | - | Medium | --- <a name="intentions"></a> ## MITRE ATT&CK tactics Defender for Cloud's supported kill chain intents are based on [version 9 of the ## Deprecated Defender for Servers alerts -The following tables include the Defender for Servers security alerts [which have been deprecated in April, 2023 due to an improvment proccess](release-notes.md#deprecation-and-improvement-of-selected-alerts-for-windows-and-linux-servers). 
+The following tables include the Defender for Servers security alerts [that were deprecated in April 2023 due to an improvement process](release-notes.md#deprecation-and-improvement-of-selected-alerts-for-windows-and-linux-servers). ### Deprecated Linux alerts |
defender-for-cloud | Concept Agentless Containers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-agentless-containers.md | Container vulnerability assessment powered by MDVM (Microsoft Defender Vulnerabi - **Scanning OS packages** - container vulnerability assessment can scan for vulnerabilities in packages installed by the OS package manager in Linux. See the [full list of the supported OS and their versions](support-agentless-containers-posture.md#registries-and-images). - **Language-specific packages** – support for language-specific packages and files, and their dependencies installed or copied without the OS package manager. See the [complete list of supported languages](support-agentless-containers-posture.md#registries-and-images). - **Image scanning in Azure Private Link** - Azure container vulnerability assessment can scan images in container registries that are accessible via Azure Private Link. This capability requires access to trusted services and authentication with the registry. Learn how to [connect privately to an Azure container registry using Azure Private Link](/azure/container-registry/container-registry-private-link#set-up-private-endpointportal-recommended). -- **Gaining intel for existing exploits of a vulnerability** - While vulnerability reporting tools can report the ever growing volume of vulnerabilities, the capacity to efficiently remediate them remains a challenge. These tools typically prioritize their remediation processes according to the severity of the vulnerability. MDVM provides additional context on the risk related with each vulnerability, leveraging intelligent assessment and risk-based prioritization against industry security benchmarks, based on three data sources: [exploit DB](https://www.exploit-db.com/), [CISA KEV](https://www.cisa.gov/known-exploited-vulnerabilities-catalog), and [MSRC](https://www.microsoft.com/msrc?SilentAuth=1&wa=wsignin1.0)+- **Exploitability information** - Each vulnerability report is checked against exploitability databases to help customers determine the actual risk associated with each reported vulnerability. - **Reporting** - Defender for Containers powered by Microsoft Defender Vulnerability Management (MDVM) reports the vulnerabilities as the following recommendation: | Recommendation | Description | |
defender-for-cloud | Defender For Apis Deploy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-apis-deploy.md | |
defender-for-cloud | Devops Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/devops-faq.md | If you don't see the SARIF file in the expected path, you may have chosen a differ ### I don't see the results for my ADO projects in Microsoft Defender for Cloud -Currently, OSS vulnerability findings are only available for GitHub repositories. +When you use the classic pipeline configuration, make sure you don't change the artifact name. Changing it can result in not seeing the results for your project. -Azure DevOps repositories will have the total exposed secrets, IaC misconfigurations, and code security findings available. It will show `N/A` for OSS vulnerabilities. You can learn more about how to [Review your findings](defender-for-devops-introduction.md). +Currently, OSS vulnerability findings are only available for GitHub repositories. Azure DevOps repositories will have the total exposed secrets, IaC misconfigurations, and code security findings available. It will show `N/A` for OSS vulnerabilities. You can learn more about how to [Review your findings](defender-for-devops-introduction.md). ### Why is my Azure DevOps repository not refreshing to healthy? |
defender-for-cloud | Incidents Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/incidents-reference.md | + + Title: Reference table for all incidents in Microsoft Defender for Cloud +description: This article lists the incidents visible in Microsoft Defender for Cloud + Last updated : 06/01/2023+++# Incidents - a reference guide ++> [!NOTE] +> For incidents that are in preview: [!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)] ++This article lists the incidents you might get from Microsoft Defender for Cloud and any Microsoft Defender plans you've enabled. The incidents shown in your environment depend on the resources and services you're protecting, and your customized configuration. ++A [security incident](alerts-overview.md#what-are-security-incidents) is a correlation of alerts with an attack story that share an entity (for example, a resource, an IP address, or a user) or that share a [kill chain](alerts-reference.md#intentions) pattern. ++You can select an incident to view all of the alerts that are related to the incident and get more information. ++Learn how to [manage security incidents](incidents.md#managing-security-incidents). ++> [!NOTE] +> The same alert can exist as part of an incident, as well as be visible as a standalone alert. ++## Security incident ++[Further details and notes](alerts-overview.md#what-are-security-incidents) ++| Alert | Description | Severity | +|--|--|--| +| **Security incident detected suspicious virtual machines activity** | This incident indicates suspicious activity on your virtual machines. Multiple alerts from different Defender for Cloud plans have been triggered revealing a similar pattern on your virtual machines. This might indicate a threat actor has gained unauthorized access to your environment and is attempting to compromise it. | Medium/High | +| **Security incident detected suspicious source IP activity** | This incident indicates that suspicious activity has been detected on the same source IP. Multiple alerts from different Defender for Cloud plans have been triggered on the same IP address, which increases the fidelity of malicious activity in your environment. Suspicious activity on the same IP address might indicate that an attacker has gained unauthorized access to your environment and is attempting to compromise it. | Medium/High | +| **Security incident detected on multiple resources** | This incident indicates that suspicious activity has been detected on your cloud resources. Multiple alerts from different Defender for Cloud plans have been triggered, revealing similar attack methods were performed on your cloud resources. This might indicate a threat actor has gained unauthorized access to your environment and is attempting to compromise it. | Medium/High | +| **Security incident detected suspicious user activity (Preview)** | This incident indicates suspicious user operations in your environment. Multiple alerts from different Defender for Cloud plans have been triggered by this user, which increases the fidelity of malicious activity in your environment. While this activity may be legitimate, a threat actor might utilize such operations to compromise resources in your environment. This might indicate that the account is compromised and is being used with malicious intent. | High | +| **Security incident detected suspicious service principal activity (Preview)** | This incident indicates suspicious service principal operations in your environment. 
Multiple alerts from different Defender for Cloud plans have been triggered by this service principal, which increases the fidelity of malicious activity in your environment. While this activity may be legitimate, a threat actor might utilize such operations to compromise resources in your environment. This might indicate that the service principal is compromised and is being used with malicious intent. | High | +| **Security incident detected suspicious crypto mining activity (Preview)** | Scenario 1: This incident indicates that suspicious crypto mining activity has been detected following suspicious user or service principal activity. Multiple alerts from different Defender for Cloud plans have been triggered on the same resource, which increases the fidelity of malicious activity in your environment. Suspicious account activity might indicate a threat actor gained unauthorized access to your environment, and the succeeding crypto mining activity may suggest that they successfully compromised your resource and are using it for mining cryptocurrencies, which can lead to increased costs for your organization. <br><br> Scenario 2: This incident indicates that suspicious crypto mining activity has been detected following a brute force attack on the same virtual machine resource. Multiple alerts from different Defender for Cloud plans have been triggered on the same resource, which increases the fidelity of malicious activity in your environment. The brute force attack on the virtual machine might indicate that a threat actor is attempting to gain unauthorized access to your environment, and the succeeding crypto mining activity may suggest they successfully compromised your resource and are using it for mining cryptocurrencies, which can lead to increased costs for your organization. | High | +| **Security incident detected suspicious Key Vault activity (Preview)** | Scenario 1: This incident indicates that suspicious activity has been detected in your environment related to the usage of Key Vault. Multiple alerts from different Defender for Cloud plans have been triggered by this user or service principal, which increases the fidelity of malicious activity in your environment. Suspicious Key Vault activity might indicate that a threat actor is attempting to gain access to your sensitive data, such as keys, secrets, and certificates, and the account is compromised and is being used with malicious intent. <br><br> Scenario 2: This incident indicates that suspicious activity has been detected in your environment related to the usage of Key Vault. Multiple alerts from different Defender for Cloud plans have been triggered from the same IP address, which increases the fidelity of malicious activity in your environment. Suspicious Key Vault activity might indicate that a threat actor is attempting to gain access to your sensitive data, such as keys, secrets, and certificates, and the account is compromised and is being used with malicious intent. <br><br> Scenario 3: This incident indicates that suspicious activity has been detected in your environment related to the usage of Key Vault. Multiple alerts from different Defender for Cloud plans have been triggered on the same resource, which increases the fidelity of malicious activity in your environment. Suspicious Key Vault activity might indicate that a threat actor is attempting to gain access to your sensitive data, such as keys, secrets, and certificates, and the account is compromised and is being used with malicious intent. 
| High | +| **Security incident detected suspicious SAS activity (Preview)** | This incident indicates that suspicious activity has been detected following the potential misuse of a SAS token. Multiple alerts from different Defender for Cloud plans have been triggered on the same resource, which increases the fidelity of malicious activity in your environment. The usage of a SAS token can indicate that a threat actor has gained unauthorized access to your storage account and is attempting to access or exfiltrate sensitive data. | High | +| **Security incident detected anomalous geographical location activity (Preview)** | Scenario 1: This incident indicates that anomalous geographical location activity has been detected in your environment. Multiple alerts from different Defender for Cloud plans have been triggered on the same resource, which increases the fidelity of malicious activity in your environment. Suspicious activity originating from anomalous locations might indicate that a threat actor gained unauthorized access to your environment and is attempting to compromise it. <br><br> Scenario 2: This incident indicates that anomalous geographical location activity has been detected in your environment. Multiple alerts from different Defender for Cloud plans have been triggered from the same IP address, which increases the fidelity of malicious activity in your environment. Suspicious activity originating from anomalous locations might indicate that a threat actor gained unauthorized access to your environment and is attempting to compromise it. | High | +| **Security incident detected suspicious IP activity (Preview)** | Scenario 1: This incident indicates that suspicious activity has been detected originating from a suspicious IP address. Multiple alerts from different Defender for Cloud plans have been triggered from the same IP address, which increases the fidelity of malicious activity in your environment. Suspicious activity originating from a suspicious IP address might indicate that an attacker has gained unauthorized access to your environment and is attempting to compromise it. <br><br> Scenario 2: This incident indicates that suspicious activity has been detected originating from a suspicious IP address. Multiple alerts from different Defender for Cloud plans have been triggered on the same user or service principal, which increases the fidelity of malicious activity in your environment. Suspicious activity originating from a suspicious IP address can indicate that an attacker has gained unauthorized access to your environment and is attempting to compromise it. | High | +| **Security incident detected suspicious fileless attack activity (Preview)** | This incident indicates that a fileless attack toolkit has been detected on a virtual machine following a potential exploit attempt on the same resource. Multiple alerts from different Defender for Cloud plans have been triggered on the same virtual machine, which increases the fidelity of malicious activity in your environment. The presence of a fileless attack toolkit on the virtual machine might indicate that a threat actor has gained unauthorized access to your environment and is attempting to evade detection while carrying out further malicious activities. | High | +| **Security incident detected suspicious DDOS activity (Preview)** | This incident indicates that suspicious Distributed Denial of Service (DDOS) activity has been detected in your environment. 
DDOS attacks are designed to overwhelm your network or application with a high volume of traffic, causing it to become unavailable to legitimate users. Multiple alerts from different Defender for Cloud plans have been triggered on the same IP address, which increases the fidelity of malicious activity in your environment. | High | +| **Security incident detected suspicious data exfiltration activity (Preview)** | Scenario 1: This incident indicates that suspicious data exfiltration activity has been detected following suspicious user or service principal activity. Multiple alerts from different Defender for Cloud plans have been triggered on the same resource, which increases the fidelity of malicious activity in your environment. Suspicious account activity might indicate that a threat actor gained unauthorized access to your environment, and the succeeding data exfiltration activity may suggest that they are attempting to steal sensitive information. <br><br> Scenario 2: This incident indicates that suspicious data exfiltration activity has been detected following suspicious user or service principal activity. Multiple alerts from different Defender for Cloud plans have been triggered from the same IP address, which increases the fidelity of malicious activity in your environment. Suspicious account activity might indicate that a threat actor gained unauthorized access to your environment, and the succeeding data exfiltration activity may suggest that they are attempting to steal sensitive information. <br><br> Scenario 3: This incident indicates that suspicious data exfiltration activity has been detected following unusual password reset on a virtual machine. Multiple alerts from different Defender for Cloud plans have been triggered from the same IP address, which increases the fidelity of malicious activity in your environment. Suspicious account activity might indicate that a threat actor gained unauthorized access to your environment, and the succeeding data exfiltration activity may suggest that they are attempting to steal sensitive information. | High | +| **Security incident detected suspicious API activity (Preview)** | This incident indicates that suspicious API activity has been detected. Multiple alerts from Defender for Cloud have been triggered on the same resource, which increases the fidelity of malicious activity in your environment. Suspicious API usage might indicate that a threat actor is attempting to access sensitive information or execute unauthorized actions. | High | +| **Security incident detected suspicious Kubernetes cluster activity (Preview)** | This incident indicates that suspicious activity has been detected on your Kubernetes cluster following suspicious user activity. Multiple alerts from different Defender for Cloud plans have been triggered on the same cluster, which increases the fidelity of malicious activity in your environment. The suspicious activity on your Kubernetes cluster might indicate that a threat actor has gained unauthorized access to your environment and is attempting to compromise it. | High | +| **Security incident detected suspicious storage activity (Preview)** | Scenario 1: This incident indicates that suspicious storage activity has been detected following suspicious user or service principal activity. Multiple alerts from different Defender for Cloud plans have been triggered on the same resource, which increases the fidelity of malicious activity in your environment. 
Suspicious account activity might indicate that a threat actor gained unauthorized access to your environment, and the succeeding suspicious storage activity may suggest they are attempting to access potentially sensitive data. <br><br> Scenario 2: This incident indicates that suspicious storage activity has been detected following suspicious user or service principal activity. Multiple alerts from different Defender for Cloud plans have been triggered from the same IP address, which increases the fidelity of malicious activity in your environment. Suspicious account activity might indicate that a threat actor gained unauthorized access to your environment, and the succeeding suspicious storage activity may suggest they are attempting to access potentially sensitive data. | High | +| **Security incident detected suspicious Azure toolkit activity (Preview)** | This incident indicates that suspicious activity has been detected following the potential usage of an Azure toolkit. Multiple alerts from different Defender for Cloud plans have been triggered on the same user or service principal, which increases the fidelity of malicious activity in your environment. The usage of an Azure toolkit can indicate that an attacker has gained unauthorized access to your environment and is attempting to compromise it. | High | +| **Security incident detected compromised machine** | This incident indicates suspicious activity on one or more of your virtual machines. Multiple alerts from different Defender for Cloud plans have been triggered in chronological order on the same resource, following the MITRE ATT&CK framework. This might indicate a threat actor has gained unauthorized access to your environment and successfully compromised this machine.| Medium/High | +| **Security incident detected compromised machine with botnet communication** | This incident indicates suspicious botnet activity on your virtual machine. Multiple alerts from different Defender for Cloud plans have been triggered in chronological order on the same resource, following the MITRE ATT&CK framework. This might indicate a threat actor has gained unauthorized access to your environment and is attempting to compromise it. | Medium/High | +| **Security incident detected compromised machines with botnet communication** | This incident indicates suspicious botnet activity on your virtual machines. Multiple alerts from different Defender for Cloud plans have been triggered in chronological order on the same resource, following the MITRE ATT&CK framework. This might indicate a threat actor has gained unauthorized access to your environment and is attempting to compromise it. | Medium/High | +| **Security incident detected compromised machine with malicious outgoing activity** | This incident indicates suspicious outgoing activity on your virtual machine. Multiple alerts from different Defender for Cloud plans have been triggered in chronological order on the same resource, following the MITRE ATT&CK framework. This might indicate a threat actor has gained unauthorized access to your environment and is attempting to compromise it. | Medium/High | +| **Security incident detected compromised machines** | This incident indicates suspicious activity on one or more of your virtual machines. Multiple alerts from different Defender for Cloud plans have been triggered in chronological order on the same resources, following the MITRE ATT&CK framework. 
This might indicate a threat actor has gained unauthorized access to your environment and successfully compromised these machines. | Medium/High | +| **Security incident detected compromised machines with malicious outgoing activity** | This incident indicates suspicious outgoing activity from your virtual machines. Multiple alerts from different Defender for Cloud plans have been triggered in chronological order on the same resources, following the MITRE ATT&CK framework. This might indicate a threat actor has gained unauthorized access to your environment and is attempting to compromise it. | Medium/High | +| **Security incident detected on multiple machines** | This incident indicates suspicious activity on one or more of your virtual machines. Multiple alerts from different Defender for Cloud plans have been triggered in chronological order on the same resource, following the MITRE ATT&CK framework. This might indicate a threat actor has gained unauthorized access to your environment and is attempting to compromise it. | Medium/High | +| **Security incident with shared process detected** | Scenario 1: This incident indicates suspicious activity on your virtual machine. Multiple alerts from different Defender for Cloud plans have been triggered sharing the same process. This might indicate a threat actor has gained unauthorized access to your environment and is attempting to compromise it. <br><br> Scenario 2: This incident indicates suspicious activity on your virtual machines. Multiple alerts from different Defender for Cloud plans have been triggered sharing the same process. This might indicate a threat actor has gained unauthorized access to your environment and is attempting to compromise it. | Medium/High | ++## Next steps ++[Manage security incidents in Microsoft Defender for Cloud](incidents.md) |
defender-for-cloud | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md | Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud Previously updated : 05/28/2023 Last updated : 06/01/2023 # What's new in Microsoft Defender for Cloud? Updates in May include: - [Release of containers Vulnerability Assessment powered by Microsoft Defender Vulnerability Management (MDVM) in Defender CSPM](#release-of-containers-vulnerability-assessment-powered-by-microsoft-defender-vulnerability-management-mdvm-in-defender-cspm) - [Renaming container recommendations powered by Qualys](#renaming-container-recommendations-powered-by-qualys) - [Defender for DevOps GitHub Application update](#defender-for-devops-github-application-update)+- [Defender for DevOps Pull Request annotations in Azure DevOps repositories now includes Infrastructure as Code misconfigurations](#defender-for-devops-pull-request-annotations-in-azure-devops-repositories-now-includes-infrastructure-as-code-misconfigurations) ### New alert in Defender for Key Vault If a subscription has a VA solution enabled on any of its VMs, no changes are ma Learn how to [Find vulnerabilities and collect software inventory with agentless scanning (Preview)](enable-vulnerability-assessment-agentless.md). +### Defender for DevOps Pull Request annotations in Azure DevOps repositories now includes Infrastructure as Code misconfigurations ++Defender for DevOps has expanded its Pull Request (PR) annotation coverage in Azure DevOps to include Infrastructure as Code (IaC) misconfigurations that are detected in ARM and Bicep templates. ++Developers can now see annotations for IaC misconfigurations directly in their PRs. Developers can also remediate critical security issues before the infrastructure is provisioned into cloud workloads. To simplify remediation, developers are provided with a severity level, misconfiguration description, and remediation instructions within each annotation. ++Previously, coverage for Defender for DevOps PR annotations in Azure DevOps only included secrets. ++Learn more about [Defender for DevOps](defender-for-devops-introduction.md) and [Pull Request annotations](enable-pull-request-annotations.md). + ## April 2023 Updates in April include: |
digital-twins | Concepts Data Ingress Egress | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-data-ingress-egress.md | You may want to send Azure Digital Twins data to other downstream services for s There are two main egress options in Azure Digital Twins. Digital twin data can be sent to most Azure services using *endpoints*. Or, if your destination is [Azure Data Explorer](/azure/data-explorer/data-explorer-overview), you can use *data history* to automatically send graph updates to an Azure Data Explorer cluster, where they are stored as historical data and can be queried as such. -In order for Azure Digital Twins to send data to other Azure services via endpoints or data history, the receiving service must have public network access enabled. Azure Digital Twins currently does not support any outbound communication to resources that have public network access disabled. +In order for Azure Digital Twins to send data to other Azure services via endpoints or data history, the receiving service must have either public network access enabled or the Trusted Microsoft Service option enabled. For data history, the data history connection must be configured with public network access enabled on the Event Hub and Azure Data Explorer instances. Once data history is configured, the Event Hub and Azure Data Explorer firewall and security settings will need to be configured manually. Once the connection is set up, Azure Digital Twins implements *at least once* delivery for data emitted to egress services. |
dns | Dns Private Resolver Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-overview.md | The following restrictions hold with respect to virtual networks: ### Subnet restrictions Subnets used for DNS resolver have the following limitations:-- The following IP address space is reserved and can't be used for the DNS resolver service: 10.0.1.0 - 10.0.16.255. - - Do not use these class C networks or subnets within these networks for DNS resolver subnets: 10.0.1.0/24, 10.0.2.0/24, 10.0.3.0/24, 10.0.4.0/24, 10.0.5.0/24, 10.0.6.0/24, 10.0.7.0/24, 10.0.8.0/24, 10.0.9.0/24, 10.0.10.0/24, 10.0.11.0/24, 10.0.12.0/24, 10.0.13.0/24, 10.0.14.0/24, 10.0.15.0/24, 10.0.16.0/24. - A subnet must be a minimum of /28 address space or a maximum of /24 address space. - A subnet can't be shared between multiple DNS resolver endpoints. A single subnet can only be used by a single DNS resolver endpoint. - All IP configurations for a DNS resolver inbound endpoint must reference the same subnet. Spanning multiple subnets in the IP configuration for a single DNS resolver inbound endpoint isn't allowed. |
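As a quick arithmetic check on the subnet-size requirement above, the sketch below validates that a candidate endpoint subnet falls between /24 and /28 and reports its address capacity. This is an illustrative Java sketch only; the sample CIDR values are made up, and Azure performs its own validation when the endpoint is created.

```java
// Checks candidate DNS Private Resolver endpoint subnets against the documented
// size limits: a prefix no larger than /24 and no smaller than /28.
public class ResolverSubnetCheck {

    static boolean isValidPrefix(int prefix) {
        return prefix >= 24 && prefix <= 28; // /24 largest, /28 smallest allowed
    }

    public static void main(String[] args) {
        String[] candidates = {"10.10.0.0/24", "10.10.1.0/28", "10.10.2.0/23", "10.10.3.0/29"};
        for (String cidr : candidates) {
            int prefix = Integer.parseInt(cidr.substring(cidr.indexOf('/') + 1));
            long addresses = 1L << (32 - prefix);
            System.out.printf("%s -> %s (%d addresses)%n",
                    cidr, isValidPrefix(prefix) ? "valid" : "invalid", addresses);
        }
    }
}
```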
event-hubs | Event Hubs Geo Dr | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-geo-dr.md | Title: Geo-disaster recovery - Azure Event Hubs| Microsoft Docs description: How to use geographical regions to fail over and perform disaster recovery in Azure Event Hubs Previously updated : 05/10/2022 Last updated : 06/01/2023 # Azure Event Hubs - Geo-disaster recovery Resilience against disastrous outages of data processing resources is a requirement for many enterprises and in some cases even required by industry regulations. -Azure Event Hubs already spreads the risk of catastrophic failures of individual machines or even complete racks across clusters that span multiple failure domains within a datacenter and it implements transparent failure detection and failover mechanisms such that the service will continue to operate within the assured service-levels and typically without noticeable interruptions in the event of such failures. If an Event Hubs namespace has been created with the enabled option for [availability zones](../availability-zones/az-overview.md), the outage risk is further spread across three physically separated facilities, and the service has enough capacity reserves to instantly cope with the complete, catastrophic loss of the entire facility. +Azure Event Hubs already spreads the risk of catastrophic failures of individual machines or even complete racks across clusters that span multiple failure domains within a datacenter. It implements transparent failure detection and failover mechanisms such that the service will continue to operate within the assured service-levels and typically without noticeable interruptions in the event of such failures. If you create an Event Hubs namespace with [availability zones](../availability-zones/az-overview.md) enabled, you reduce the risk of outage further and enable high availability. With availability zones, the outage risk is further spread across three physically separated facilities, and the service has enough capacity reserves to instantly cope with the complete, catastrophic loss of the entire facility. The all-active Azure Event Hubs cluster model with availability zone support provides resiliency against grave hardware failures and even catastrophic loss of entire datacenter facilities. Still, there might be grave situations with widespread physical destruction that even those measures cannot sufficiently defend against. |
event-hubs | Log Compaction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/log-compaction.md | Log compaction feature of Event Hubs provides the following guarantee: ## Log compaction use cases Log compaction can be useful in scenarios where you stream the same set of updatable events. As compacted event hubs only keep the latest events, users don't need to worry about the growth of the event storage. Therefore, log compaction is commonly used in scenarios such as Change Data Capture (CDC), maintaining events in tables for stream processing applications, and event caching. +## Quotas and limits +| Limit | Basic | Standard | Premium | Dedicated | +| -- | -- | -- | -- | -- | +| Size of compacted event hub | N/A | 1 GB per partition | 250 GB per partition | 250 GB per partition | ++For other quotas and limits, see [Event Hubs quotas and limits](event-hubs-quotas.md). ## Next steps For instructions on how to use log compaction in Event Hubs, see [Use log compaction](./use-log-compaction.md) |
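To make the "latest event per key" behavior concrete, here is a hedged Java sketch of a producer that sends keyed updates to a compacted event hub through its Kafka endpoint. The namespace, event hub name, and payloads are placeholders, and compaction is assumed to be enabled on the event hub; sending a null value (a tombstone) for an existing key marks that key for removal during compaction.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

// Sends keyed updates to a compacted event hub via its Kafka endpoint.
// Compaction retains only the latest value per key; a null value (tombstone)
// marks the key for eventual removal. Names and secrets are placeholders.
public class CompactedTopicProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "mynamespace.servicebus.windows.net:9093");
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"$ConnectionString\" password=\"<event-hubs-connection-string>\";");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Two updates for the same key: after compaction only the second remains.
            producer.send(new ProducerRecord<>("device-state", "device-1", "{\"temp\":20}"));
            producer.send(new ProducerRecord<>("device-state", "device-1", "{\"temp\":21}"));
            // Tombstone: asks compaction to eventually drop device-2 entirely.
            producer.send(new ProducerRecord<>("device-state", "device-2", null));
        }
    }
}
```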
event-hubs | Use Log Compaction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/use-log-compaction.md | With Kafka you can set the partition key when you create the `ProducerRecord` as ProducerRecord<String, String> record = new ProducerRecord<String, String>(TOPIC, "Key-1" , "Value-1"); ``` +## Quotas and limits +| Limit | Basic | Standard | Premium | Dedicated | +| -- | -- | -- | -- | -- | +| Size of compacted event hub | N/A | 1 GB per partition | 250 GB per partition | 250 GB per partition | ++For other quotas and limits, see [Event Hubs quotas and limits](event-hubs-quotas.md). ## Consuming events from a compacted topic There are no changes required at the consumer side to consume events from a compacted event hub. So, you can use any of the existing consumer applications to consume data from a compacted event hub. |
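Since no consumer-side changes are required, an ordinary Kafka consumer can rebuild a latest-value table from a compacted event hub. This sketch reuses the placeholder connection settings from the producer example above; the bounded poll loop and names are illustrative only.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// Rebuilds a latest-value table from a compacted event hub: read from the
// beginning, keep the last value seen per key, drop keys on tombstones.
public class CompactedTopicConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "mynamespace.servicebus.windows.net:9093");
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"$ConnectionString\" password=\"<event-hubs-connection-string>\";");
        props.put("group.id", "state-rebuilder");
        props.put("auto.offset.reset", "earliest"); // start at the head of the compacted log
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        Map<String, String> latest = new HashMap<>();
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("device-state"));
            for (int i = 0; i < 10; i++) { // bounded loop for this sketch
                for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofSeconds(1))) {
                    if (rec.value() == null) latest.remove(rec.key()); // tombstone
                    else latest.put(rec.key(), rec.value());
                }
            }
        }
        System.out.println(latest); // e.g. {device-1={"temp":21}}
    }
}
```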
firewall | Ftp Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/ftp-support.md | However, you can enable Active FTP when you deploy using Azure PowerShell, the A The following table shows the configuration required to support various FTP scenarios: +> [!TIP] +> Remember that it may also be necessary to configure firewall rules on the client side to support the connection. |Firewall Scenario |Active FTP mode |Passive FTP mode | |||| |VNet-VNet |Network Rules to configure:<br>- Allow From Source VNet to Dest IP port 21<br>- Allow From Dest IP port 20 to Source VNet |Network Rules to configure:<br>- Allow From Source VNet to Dest IP port 21<br>- Allow From Source VNet to Dest IP \<Range of Data Ports>| |Outbound VNet - Internet<br><br>(FTP client in VNet, server on Internet) |Not supported *|**Pre-Condition**: Configure FTP server to accept data and control channels from different source IP addresses. Alternatively, configure Azure Firewall with a single public IP address.<br><br>Network Rules to configure:<br>- Allow From Source VNet to Dest IP port 21<br>- Allow From Source VNet to Dest IP \<Range of Data Ports> |-|Inbound DNAT<br><br>(FTP client on Internet, server in VNet) |DNAT rule to configure:<br>- DNAT From Internet Source to VNet IP port 21<br><br>Network rule to configure:<br>- Allow From VNet IP port 20 to Internet Source |**Pre-Condition**:<br>Configure FTP server to accept data and control channels from different source IP addresses.<br><br>Tip: Azure Firewall supports a limited number of DNAT rules. It is important to configure the FTP server to use a small port range on the Data channel.<br><br>DNAT Rules to configure:<br>- DNAT From Internet Source to VNet IP port 21<br>- DNAT From Internet Source to VNet IP \<Range of Data Ports> | +|Inbound DNAT<br><br>(FTP client on Internet, server in VNet) |DNAT rule to configure:<br>- DNAT From Internet Source to VNet IP port 21<br><br>Network rule to configure:<br>- Allow **from** the FTP server VNet IP **to** the client's Internet destination IP, on the client's configured active FTP port ranges |**Pre-Condition**:<br>Configure FTP server to accept data and control channels from different source IP addresses.<br><br>Tip: Azure Firewall supports a limited number of DNAT rules. It is important to configure the FTP server to use a small port range on the Data channel.<br><br>DNAT Rules to configure:<br>- DNAT From Internet Source to VNet IP port 21<br>- DNAT From Internet Source to VNet IP \<Range of Data Ports> | \* Active FTP will not work when the FTP client must reach an FTP server on the Internet. Active FTP uses a PORT command from the FTP client that tells the FTP server what IP address and port to use for the data channel. This PORT command uses the private IP address of the client, which cannot be changed. Client-side traffic traversing the Azure Firewall will be NATed for Internet-based communications, so the PORT command is seen as invalid by the FTP server. This is a general limitation of Active FTP when used with a client-side NAT. |
frontdoor | Understanding Pricing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/understanding-pricing.md | + + Title: Compare pricing between Azure Front Door tiers +description: This article describes the billing model for Azure Front Door and compares the pricing for the Standard, Premium, and (classic) tiers. ++++ Last updated : 05/30/2023++++# Compare pricing between Azure Front Door tiers ++> [!NOTE] +> Prices shown in this article are examples and are for illustration purposes only. For pricing information according to your region, see the [Pricing page](https://azure.microsoft.com/pricing/details/frontdoor/). ++Azure Front Door has three tiers: Standard, Premium, and (classic). This article describes the billing model for Azure Front Door and compares the pricing for the Standard, Premium, and (classic) tiers. When migrating from Azure Front Door (classic) to Standard or Premium, we recommend you do a cost analysis to understand the pricing differences between the tiers. We show you how to evaluate costs so that you can apply the analysis to your environment. ++## Pricing model comparison ++| Pricing dimensions | Standard | Premium | Classic | +|--|--|--|--| +| Base fees (per month) | $35 | $330 | N/A | +| Outbound data transfer from edge location to client (per GB) | Varies by 8 zones | Same as standard | - Varies by 5 zones </br>- Higher unit rates when compared to Standard/Premium | +| Outbound data transfer from edge to the origin (per GB) | Varies by 8 zones | Same as standard | Free | +| Incoming requests from client to Front Door’s edge location (per 10,000 requests) | Varies by 8 zones | - Varies by 8 zones </br>- Higher unit rate than Standard | Free | +| First 5 routing rules (per hour) | Free | Free | $0.03 | +| Per additional routing rule (per hour) | Free | Free | $0.012 | +| Inbound data transfer (per GB) | Free | Free | $0.01 | +| Web Application Firewall custom rules | Free | Free | - $5/month/policy </br>- $1/month & $0.06 per million requests </br></br>For more information, see [Azure Web Application Firewall pricing](https://azure.microsoft.com/pricing/details/web-application-firewall/) | +| Web Application Firewall managed rules | Free | Free | - $5/month/policy </br>- $20/month +$1/million requests </br></br>For more information, see [Azure Web Application Firewall pricing](https://azure.microsoft.com/pricing/details/web-application-firewall/) | +| Data transfer from an origin in Azure data center to Front Door's edge location | Free | Free | See [Bandwidth pricing](https://azure.microsoft.com/pricing/details/bandwidth/) | +| Private link to origin | Not supported | Free | Not supported | +| First 100 custom domains per month | Free | Free | Free | +| Per additional custom domain (per month) | Free | Free | $5 | ++## Cost assessment ++> [!NOTE] +> Azure Front Door Standard and Premium have a lower total cost of ownership than Azure Front Door (classic). If you have a request-heavy workload, it's recommended to estimate the impact of the request meter of the new tiers. If you have multiple instances of Azure Front Door, it's recommended to estimate the impact of the base fee of the new tiers. ++The following steps provide general guidance for getting the right metrics to estimate the cost of the new tiers. ++1. Pull the invoice for the Azure Front Door (classic) profile to get the monthly charges. ++1. 
Compute the Azure Front Door Standard/Premium pricing using the following table: ++ | Azure Front Door Standard/Premium meter | How to calculate from Azure Front Door (classic) metrics | + |--|--| + | Base fee | - If you need managed WAF rules, bot protection, or Private Link: **$330/month** </br> - If you only need custom WAF rules: **$35/month** | + | Requests | **For Standard:** </br>1. Go to your Azure Front Door (classic) profile, select **Metrics** from under *Monitor* in the left-side menu pane. </br>2. Select **Request Count** from the *Metrics* drop-down menu. </br> 3. To view regional metrics, you can apply a split to the data by selecting **Client Country** or **Client Region**. </br> 4. If you select *Client Country*, you need to map each country to the corresponding Azure Front Door pricing zone. </br> :::image type="content" source="./media/understanding-pricing/request-count.png" alt-text="Screenshot of the request count metric for Front Door (classic)." lightbox="./media/understanding-pricing/request-count.png"::: </br> **For Premium:** </br>You can look at the **Request Count** and the **WAF Request Count** metric in the Azure Front Door (classic) profile. </br> :::image type="content" source="./media/understanding-pricing/waf-request-count.png" alt-text="Screenshot of the Web Application Firewall request count metric for Front Door (classic)." lightbox="./media/understanding-pricing/waf-request-count.png"::: | + | Egress from Azure Front Door edge to client | You can obtain this data from your Azure Front Door (classic) invoice or from the **Billable Response Size** metric in the Azure Front Door (classic) profile. To get a more accurate estimation, apply a split by *Client Country* or *Client Region*.</br> :::image type="content" source="./media/understanding-pricing/billable-response-size.png" alt-text="Screenshot of the billable response size metric for Front Door (classic)." lightbox="./media/understanding-pricing/billable-response-size.png"::: | + | Ingress from Azure Front Door edge to origin | You can obtain this data from your Azure Front Door (classic) invoice. Use the quantities for data transfer from client to edge location as an estimate. | ++1. Go to the [pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=frontdoor-standard-premium). ++1. Select the appropriate Azure Front Door tier and zone. ++1. Calculate the total cost for the Azure Front Door Standard/Premium profile from the metrics you obtained in the previous step. (A scripted sketch of this arithmetic appears at the end of this article.) ++## Example scenarios ++Azure Front Door Standard/Premium costs less than Azure Front Door (classic) in the first three scenarios. However, in scenarios 4 and 5, there are situations where Azure Front Door Standard/Premium can incur higher charges than Azure Front Door (classic). In these scenarios, you can use the cost assessment to estimate the cost of the new tiers. ++### Scenario 1: A static website with custom WAF rules ++* 10 routing rules are configured. +* 20 TB of outbound data transfer from Azure Front Door edge to client. +* 200 million requests from client to Azure Front Door edge (including 100 million custom WAF requests and 10 custom rules). +* Traffic mostly originates from North America and Europe. 
++| Cost dimensions | Azure Front Door (classic) | Azure Front Door Standard | +|--|--|--| +| Base fee | $0 | $35 | +| Routing rules | $43.80 | $0 | +| WAF policy and rule sets | $75 = $15 (WAF policy + custom WAF rules) + $60 (requests)| $0 | +| Requests | $0 | $300 | +| Egress from Azure Front Door edge to client | $3,200 | $1,475 | +| Ingress from Azure Front Door edge to origin | $0 | $0 | +| Total | ~$3,319 | $1,810 | ++Azure Front Door Standard is ~45% cheaper than Azure Front Door (classic) for static websites with custom WAF rules because of the lower egress cost and the free routing rules. ++### Scenario 2: A static website with managed WAF rules ++* 30 routing rules are configured. +* 20 TB of outbound data transfer from Azure Front Door edge to client. +* 200 million requests from client to Azure Front Door edge (including 100 million managed WAF requests). +* Traffic mostly originates from Asia Pacific (including Japan). ++| Cost dimensions | Azure Front Door (classic) | Azure Front Door Premium | +|--|--|--| +| Base fee | $0 | $330 | +| Routing rules | $219 | $0 | +| WAF policy and rule sets | $220 = $20 (managed WAF rules and ruleset) + $200 (requests) | $0 | +| Requests | $0 | $336 | +| Egress from Azure Front Door edge to client | $4,700 | $2,125 | +| Ingress from Azure Front Door edge to origin | $0 | $0 | +| Total | ~$5,139 | $2,791 | ++Azure Front Door Premium is ~45% cheaper than Azure Front Door (classic) for static websites with managed WAF rules because of the lower egress cost and the free routing rules. ++### Scenario 3: File downloads ++* Two routing rules are configured. +* 150 TB of outbound data transfer from Azure Front Door edge to client. +* 1.5 million requests from client to Azure Front Door edge. +* Traffic mostly originates from India. ++| Cost dimensions | Azure Front Door (classic) | Azure Front Door Standard | +|--|--|--| +| Base fee | $0 | $35 | +| Routing rules | $0 | $0 | +| WAF policy and rule sets | $0 | $0 | +| Requests | $0 | $1.62 | +| Egress from Azure Front Door edge to client | $39,500 | $12,690 | +| Ingress from Azure Front Door edge to origin | $0 | $0 | +| Total | ~$39,500 | $12,727 | ++Azure Front Door Standard is ~68% cheaper than Azure Front Door (classic) for file downloads because of the lower egress cost. ++### Scenario 4: Request-heavy scenario with WAF protection ++* A dynamic e-commerce website with 150 routing rules is configured. +* 20 TB of outbound data transfer from Azure Front Door edge to client. +* 5 billion requests with 10 TB of ingress. +* 2.4 billion WAF requests (1.2 billion managed WAF rule requests and 1.2 billion custom WAF rule requests). ++| Cost dimensions | Azure Front Door (classic) | Azure Front Door Premium | +|--|--|--| +| Base fee | $0 | $330 | +| Routing rules | $1,314 | $0 | +| WAF policy and rule sets | $1,840 = $20 (WAF policy) + $1,820 (requests) | $0 | +| Requests | $0 | $6,748 | +| Egress from Azure Front Door edge to client | $3,200 | $1,475| +| Ingress from Azure Front Door edge to origin | $100 | $200 | +| Total | $6,454 | $8,753 | ++In this comparison, Azure Front Door Premium is ~35% more expensive than Azure Front Door (classic) because of the higher request cost and the base fee. If the cost increase is significant, reach out to your Microsoft sales representative to discuss options. +++### Scenario 5: Social media application with multiple Front Door (classic) profiles with WAF protection ++* The application is designed in a microservices architecture with static and dynamic traffic. 
Each microservice component is deployed in a separate Azure Front Door (classic) profile. In total, there are 80 Azure Front Door (classic) profiles (30 dev/test, 50 production). +* In each profile, there are 10 routing rules configured to route traffic to different backends based on the path. +* There are two WAF policies with two rule sets to protect the application from top CVE attacks. +* 50 million requests per month (20 million requests get blocked by WAF). +* 50 TB of outbound data transfer from Azure Front Door edge to client. +* Traffic mostly originates from North America. ++| Cost dimensions | Azure Front Door (classic) | Azure Front Door Premium | +|--|--|--| +| Base fee | $0 | $26,400 = $330 x 80 profiles | +| Routing rules | $7,008 | $0 | +| WAF policy and rule sets | $60 = $40 (WAF policy) + $20 (requests) | $0 | +| Requests | $0 | $75 | +| Egress from Azure Front Door edge to client | $7,700 | $3,425| +| Ingress from Azure Front Door edge to origin | $2 | $4 | +| Total | $14,770 | $29,904 | ++In this comparison, Azure Front Door Premium is more than twice as expensive as Azure Front Door (classic) because of the higher base fee. ++#### Suggestions to reduce cost ++* Check if all 80 instances of Azure Front Door (classic) are required. Remove unnecessary resources, such as temporary testing environments. +* Migrate your most important Front Door (classic) profiles to Azure Front Door Standard/Premium based on the features you need. +* If you have multiple Front Door (classic) profiles, consider consolidating them into a single Azure Front Door Standard/Premium profile. This change can reduce the base fee and the routing rule cost. The capability to consolidate multiple Azure Front Door (classic) profiles into a single Azure Front Door Standard/Premium profile will be available soon. ++## Next steps ++* Learn how [settings are mapped](tier-mapping.md) from Azure Front Door (classic) to Azure Front Door Standard/Premium. +* Learn about [Azure Front Door (classic) tier migration](tier-migration.md). +* Learn how to [migrate from Azure Front Door (classic) to Azure Front Door Standard/Premium](migrate-tier.md). |
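To script the cost assessment described earlier, the meters reduce to simple arithmetic. A rough sketch — the unit rates below are assumptions chosen for illustration, not published prices, so substitute the rates for your zone from the pricing calculator:

```python
# Back-of-the-envelope estimate only. The unit rates are illustrative
# assumptions, not published prices -- look up real rates for your zone.
BASE_FEE = {"standard": 35.0, "premium": 330.0}

def estimate_monthly_cost(tier, requests_millions, egress_gb, ingress_gb,
                          request_rate_per_million=1.5,
                          egress_rate_per_gb=0.075,
                          ingress_rate_per_gb=0.02):
    """Combine the four meters into a monthly Standard/Premium estimate."""
    return (BASE_FEE[tier]
            + requests_millions * request_rate_per_million
            + egress_gb * egress_rate_per_gb
            + ingress_gb * ingress_rate_per_gb)

# Scenario-1-like inputs: 200 million requests and 20 TB (20,000 GB) egress.
print(f"${estimate_monthly_cost('standard', 200, 20_000, 0):,.2f}")
```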
healthcare-apis | Overview Of Device Mapping | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/overview-of-device-mapping.md | Title: Overview the MedTech service device mapping - Azure Health Data Services + Title: Overview of the MedTech service device mapping - Azure Health Data Services description: Learn about the MedTech service device mapping. Previously updated : 04/24/2023 Last updated : 05/31/2023 |
iot-edge | Troubleshoot Common Errors | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/troubleshoot-common-errors.md | When migrating to the new IoT hub (assuming not using DPS), follow these steps i 1. Restart the top-level parent Edge device first, and make sure it's up and running 1. Restart each device in the hierarchy, level by level, from top to bottom +### IoT Edge has low message throughput when geographically distant from IoT Hub ++#### Symptoms ++Azure IoT Edge devices that are geographically distant from Azure IoT Hub have a lower-than-expected message throughput. ++#### Cause ++High latency between the device and IoT Hub can cause a lower-than-expected message throughput. IoT Edge uses a default message batch size of 10. This limits the number of messages that are sent in a single batch, which increases the number of round trips between the device and IoT Hub. ++#### Solution ++Try increasing the Edge Hub **MaxUpstreamBatchSize** environment variable. This allows more messages to be sent in a single batch, which reduces the number of round trips between the device and IoT Hub. If you deploy with a deployment manifest instead of the portal, the manifest fragment sketch after this section shows the same override. ++To set Edge Hub environment variables in the Azure portal: ++1. Navigate to your IoT Hub and select **Devices** under the **Device management** menu. +1. Select the IoT Edge device that you want to update. +1. Select **Set Modules**. +1. Select **Runtime Settings**. +1. In the **Edge Hub** module settings tab, add the **MaxUpstreamBatchSize** environment variable as type **Number** with a value of **20**. +1. Select **Apply**. + ## Next steps Do you think that you found a bug in the IoT Edge platform? [Submit an issue](https://github.com/Azure/iotedge/issues) so that we can continue to improve. |
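If you manage devices with deployment manifests rather than the portal, the same override lives under the edgeHub system module. A sketch of just that fragment, built as a Python dict — the surrounding `$edgeAgent` desired-properties fields (module images, settings, and so on) are omitted, and the fragment's shape follows the standard IoT Edge deployment manifest:

```python
import json

# Sketch of the relevant deployment manifest fragment only: the edgeHub
# system module's env override. Other required desired-properties fields
# are omitted here for brevity.
edge_agent_desired_fragment = {
    "systemModules": {
        "edgeHub": {
            "type": "docker",
            "status": "running",
            "restartPolicy": "always",
            "env": {
                # Larger upstream batches mean fewer round trips to IoT Hub.
                "MaxUpstreamBatchSize": {"value": "20"}
            },
        }
    }
}

print(json.dumps(edge_agent_desired_fragment, indent=2))
```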
iot-hub-device-update | Device Update Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-limits.md | If you plan to deploy large-file packages, with file size larger than 100 MB, it The Device Update for IoT Hub service utilizes Content Delivery Networks (CDNs) that work optimally with range requests of 1 MB in size. Range requests larger than 100 MB are not supported. +## Throttling limits ++The following table shows the enforced throttles for operations that are available in all Device Update for IoT Hub tiers. Values refer to an individual Device Update instance. ++|Device Update service API | Throttling Rate | +|-|| +|GetGroups |30/min| +|GetGroupDetails| 30/min| +|GetBestUpdates per group| 30/min| +|GetUpdateCompliance per group| 30/min| +|GetAllUpdateCompliance |30/min| +|GetSubgroupUpdateCompliance| 30/min| +|GetSubgroupBestUpdates| 30/min| +|CreateOrUpdateDeployment| 7/min | +|DeleteDeployment| 7/min | +|ProcessSubgroupDeployment | 7/min| ++ ## Next steps - [Create a Device Update for IoT Hub account](create-device-update-account.md) |
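When you call these APIs from scripts, pacing requests to stay under the published rates avoids throttling errors. A minimal client-side limiter sketch; the per-minute budget in the example mirrors the `CreateOrUpdateDeployment` row in the table above:

```python
import time

class RateLimiter:
    """Naive fixed-window limiter: allow at most `limit` calls per minute."""

    def __init__(self, limit_per_minute):
        self.limit = limit_per_minute
        self.window_start = time.monotonic()
        self.count = 0

    def acquire(self):
        now = time.monotonic()
        if now - self.window_start >= 60:
            self.window_start, self.count = now, 0
        if self.count >= self.limit:
            # Wait out the rest of the current window, then start a new one.
            time.sleep(max(0.0, 60 - (now - self.window_start)))
            self.window_start, self.count = time.monotonic(), 0
        self.count += 1

deployment_limiter = RateLimiter(7)    # CreateOrUpdateDeployment: 7/min
deployment_limiter.acquire()           # call before each deployment request
```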
load-balancer | Distribution Mode Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/distribution-mode-concepts.md | Azure Load Balancer supports the following distribution modes for routing connec | Distribution mode | Hash based | Session persistence: Client IP | Session persistence: Client IP and protocol | | | | | | | Overview | Traffic from the same client IP routed to any healthy instance in the backend pool | Traffic from the same client IP is routed to the same backend instance | Traffic from the same client IP and protocol is routed to the same backend instance |-| Tuples | 5 tuple | 2 tuple | 3 tuple | +| Tuples | five-tuple | two-tuple | three-tuple | | Azure portal configuration | Session persistence: **None** | Session persistence: **Client IP** | Session persistence: **Client IP and protocol** | | [REST API](/rest/api/load-balancer/load-balancers/create-or-update#loaddistribution) | ```"loadDistribution":"Default"```| ```"loadDistribution":"SourceIP"``` | ```"loadDistribution":"SourceIPProtocol"``` | There's no downtime when switching from one distribution mode to another on a lo ## Hash based -Azure Load Balancer uses a five tuple hash based distribution mode by default. +Azure Load Balancer uses a five-tuple hash based distribution mode by default. -The five tuple consists of: +The five-tuple consists of: * **Source IP** * **Source port** * **Destination IP** In order to configure hash based distribution, you must select session persisten  -*Figure: Default 5 tuple hash based distribution* +*Figure: Default five-tuple hash based distribution* ## Session persistence -Session persistence is also known session affinity, source IP affinity, or client IP affinity. This distribution mode uses a two-tuple (source IP and destination IP) or three-tuple (source IP, destination IP, and protocol type) hash to route to backend instances. When using session persistence, connections from the same client will go to the same backend instance within the backend pool. +Session persistence is also known as session affinity, source IP affinity, or client IP affinity. This distribution mode uses a two-tuple (source IP and destination IP) or three-tuple (source IP, destination IP, and protocol type) hash to route to backend instances. When using session persistence, connections from the same client go to the same backend instance within the backend pool. Session persistence mode has two configuration types: -* **Client IP (2-tuple)** - Specifies that successive requests from the same client IP address will be handled by the same backend instance. -* **Client IP and protocol (3-tuple)** - Specifies that successive requests from the same client IP address and protocol combination will be handled by the same backend instance. +* **Client IP (2-tuple)** - Specifies that successive requests from the same client IP address are handled by the same backend instance. +* **Client IP and protocol (3-tuple)** - Specifies that successive requests from the same client IP address and protocol combination are handled by the same backend instance. -The following figure illustrates a two-tuple configuration. Notice how the two-tuple runs through the load balancer to virtual machine 1 (VM1). VM1 is then backed up by VM2 and VM3. +The following figure illustrates a two-tuple configuration. Notice how the two-tuple runs through the load balancer to virtual machine 1 (VM1). VM1 is backed up by VM2 and VM3.  The following figure illustrates a two-tuple configuration. 
Notice how the two-tuple runs through the load balancer to virtual machine 1 (VM1). ## Use cases -Source IP affinity with client IP and protocol (source IP affinity 3-tuple), solves an incompatibility between Azure Load Balancer and Remote Desktop Gateway (RD Gateway). +Source IP affinity with client IP and protocol (source IP affinity three-tuple) solves an incompatibility between Azure Load Balancer and Remote Desktop Gateway (RD Gateway). Another use case is media upload. The data upload happens through UDP, but the control plane uses TCP: |
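The practical difference between the modes is simply which fields feed the hash. A toy sketch of how a five-tuple versus a two-tuple hash maps flows onto backends — illustrative only, since this isn't Azure's actual hashing algorithm:

```python
import hashlib

BACKENDS = ["VM1", "VM2", "VM3"]

def pick_backend(*fields):
    """Hash the chosen tuple fields and map the digest onto a backend."""
    digest = hashlib.sha256("|".join(map(str, fields)).encode()).digest()
    return BACKENDS[int.from_bytes(digest[:4], "big") % len(BACKENDS)]

# Five-tuple (default): a new source port may land on a different backend.
print(pick_backend("10.0.0.4", 50001, "10.1.0.5", 80, "TCP"))
print(pick_backend("10.0.0.4", 50002, "10.1.0.5", 80, "TCP"))

# Two-tuple (Client IP): the same client and destination always map the
# same way, regardless of source port or protocol.
print(pick_backend("10.0.0.4", "10.1.0.5"))
```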
load-balancer | Load Balancer Troubleshoot Health Probe Status | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-troubleshoot-health-probe-status.md | Title: Troubleshoot Azure Load Balancer health probe status description: Learn how to troubleshoot known issues with Azure Load Balancer health probe status. -+ Previously updated : 12/02/2020 Last updated : 05/31/2023 -+ # Troubleshoot Azure Load Balancer health probe status This page provides troubleshooting information for common Azure Load Balancer health probe questions. -## Symptom: VMs behind the Load Balancer are not responding to health probes +## Symptom: VMs behind the Load Balancer aren't responding to health probes For the backend servers to participate in the load balancer set, they must pass the probe check. For more information about health probes, see [Understanding Load Balancer Probes](load-balancer-custom-probe-overview.md).  The Load Balancer backend pool VMs may not be responding to the probes due to any of the following reasons: The Load Balancer backend pool VMs may not be responding to the probes due to an **Validation and resolution** -To resolve this issue, log in to the participating VMs, and check if the VM state is healthy, and can respond to **PsPing** or **TCPing** from another VM in the pool. If the VM is unhealthy, or is unable to respond to the probe, you must rectify the issue and get the VM back to a healthy state before it can participate in load balancing. +To resolve this issue, sign in to the participating VMs, check that the VM state is healthy, and confirm that the VM can respond to **PsPing** or **TCPing** from another VM in the pool. If the VM is unhealthy, or is unable to respond to the probe, you must rectify the issue and get the VM back to a healthy state before it can participate in load balancing. -### Cause 2: Load Balancer backend pool VM is not listening on the probe port -If the VM is healthy, but is not responding to the probe, then one possible reason could be that the probe port is not open on the participating VM, or the VM is not listening on that port. +### Cause 2: Load Balancer backend pool VM isn't listening on the probe port +If the VM is healthy, but isn't responding to the probe, then one possible reason could be that the probe port isn't open on the participating VM, or the VM isn't listening on that port. **Validation and resolution** -1. Log in to the backend VM. -2. Open a command prompt and run the following command to validate there is an application listening on the probe port: +1. Sign in to the backend VM. +2. Open a command prompt and run the following command to validate there's an application listening on the probe port: netstat -an-3. If the port state is not listed as **LISTENING**, configure the proper port. -4. Alternatively, select another port, that is listed as **LISTENING**, and update load balancer configuration accordingly. +3. If the port state isn't listed as **LISTENING**, configure the proper port. +4. Alternatively, select another port that is listed as LISTENING and update the load balancer configuration accordingly. ### Cause 3: Firewall, or a network security group is blocking the port on the load balancer backend pool VMs-If the firewall on the VM is blocking the probe port, or one or more network security groups configured on the subnet or on the VM, is not allowing the probe to reach the port, the VM is unable to respond to the health probe. 
+If the firewall on the VM is blocking the probe port, or one or more network security groups configured on the subnet or on the VM aren't allowing the probe to reach the port, the VM is unable to respond to the health probe. **Validation and resolution** -1. If the firewall is enabled, check if it is configured to allow the probe port. If not, configure the firewall to allow traffic on the probe port, and test again. -- Check to make sure your VM firewall is not blocking probe traffic originating from IP address `168.63.129.16`+1. If the firewall is enabled, check if it's configured to allow the probe port. If not, configure the firewall to allow traffic on the probe port, and test again. +- Check to make sure your VM firewall isn't blocking probe traffic originating from IP address `168.63.129.16` - You can check listening ports by running `netstat -a` from a Windows command prompt or `netstat -l` from a Linux terminal - You can query your firewall profiles to check whether your policies are blocking incoming traffic by running `netsh advfirewall show allprofiles | more` from a Windows command prompt or `sudo iptables -L` from a Linux terminal to see all configured firewall rules. - For more details on troubleshooting firewall issues for Azure VMs, see [Azure VM Guest OS firewall is blocking inbound traffic](/troubleshoot/azure/virtual-machines/guest-os-firewall-blocking-inbound-traffic). If the firewall on the VM is blocking the probe port, or one or more network sec 5. Test if the VM has now started responding to the health probes. ### Cause 4: Other misconfigurations in Load Balancer-If all the preceding causes seem to be validated and resolved correctly, and the backend VM still does not respond to the health probe, then manually test for connectivity, and collect some traces to understand the connectivity. +If all the preceding causes seem to be validated and resolved correctly, and the backend VM still doesn't respond to the health probe, then manually test for connectivity, and collect some traces to understand the connectivity. **Validation and resolution** If all the preceding causes seem to be validated and resolved correctly, and the 3. If no response is received in these ping tests, then - Run a simultaneous Netsh trace on the target backend pool VM and another test VM from the same VNet. Now, run a PsPing test for some time, collect some network traces, and then stop the test. - Analyze the network capture and see if there are both incoming and outgoing packets related to the ping query. - - If no incoming packets are observed on the backend pool VM, there is potentially a network security groups or UDR mis-configuration blocking the traffic. + - If no incoming packets are observed on the backend pool VM, there's potentially a network security group or UDR misconfiguration blocking the traffic. - If no outgoing packets are observed on the backend pool VM, the VM needs to be checked for any unrelated issues (for example, an application blocking the probe port). - Verify if the probe packets are being forced to another destination (possibly via UDR settings) before reaching the load balancer. This can cause the traffic to never reach the backend VM. 4. Change the probe type (for example, HTTP to TCP), and configure the corresponding port in network security group ACLs and the firewall to validate whether the issue is with the probe response configuration. 
For more information about health probe configuration, see [Endpoint Load Balancing health probe configuration](/archive/blogs/mast/endpoint-load-balancing-heath-probe-configuration-details). ## Next steps -If the preceding steps do not resolve the issue, open a [support ticket](https://azure.microsoft.com/support/options/). +If the preceding steps don't resolve the issue, open a [support ticket](https://azure.microsoft.com/support/options/). |
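As a lightweight alternative to **PsPing** or **TCPing**, a TCP connect test from another VM in the pool confirms whether the probe port is reachable. A minimal sketch using Python's standard `socket` module, with a placeholder backend address and port:

```python
import socket

def probe(host, port, timeout=3):
    """Attempt a TCP connect the way a TCP health probe would."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError as err:
        print(f"probe failed: {err}")
        return False

# Placeholder backend VM address and probe port -- replace with your values.
print("healthy" if probe("10.0.0.4", 80) else "unhealthy")
```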
load-balancer | Quickstart Load Balancer Standard Internal Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-internal-powershell.md | description: This quickstart shows how to create an internal load balancer using Previously updated : 09/02/2022 Last updated : 05/31/2023 #Customer intent: I want to create a load balancer so that I can load balance internal traffic to VMs. Use [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress) ## Create public IP address for NAT gateway and place IP in variable ## $gwpublicip = @{ Name = 'myNATgatewayIP'- ResourceGroupName = 'CreatePubLBQS-rg' + ResourceGroupName = 'CreateIntLBQS-rg' Location = 'eastus' Sku = 'Standard' AllocationMethod = 'static' To create a zonal public IP address in zone 1, use the following command: ## Create a zonal public IP address for NAT gateway and place IP in variable ## $gwpublicip = @{ Name = 'myNATgatewayIP'- ResourceGroupName = 'CreatePubLBQS-rg' + ResourceGroupName = 'CreateIntLBQS-rg' Location = 'eastus' Sku = 'Standard' AllocationMethod = 'static' $gwpublicip = @{ $gwpublicip = New-AzPublicIpAddress @gwpublicip ```+> [!NOTE] +> The public IP address is used by the NAT gateway to provide outbound connectivity for the virtual machines in the backend pool. This is recommended when you create an internal load balancer and need the backend pool resources to have outbound connectivity. For more information, see [NAT gateway](load-balancer-outbound-connections.md). ### Create virtual network, network security group, bastion host, and NAT gateway $gwpublicip = New-AzPublicIpAddress @gwpublicip ## Create NAT gateway resource ## $nat = @{- ResourceGroupName = 'CreatePubLBQS-rg' + ResourceGroupName = 'CreateIntLBQS-rg' Name = 'myNATgateway' IdleTimeoutInMinutes = '10' Sku = 'Standard' New-AzLoadBalancer @loadbalancer ## Create virtual machines -In this section, you'll create the two virtual machines for the backend pool of the load balancer. +In this section, you create the two virtual machines for the backend pool of the load balancer. * Create three network interfaces with [New-AzNetworkInterface](/powershell/module/az.network/new-aznetworkinterface) $nsg = Get-AzNetworkSecurityGroup ## For loop with variable to create virtual machines for load balancer backend pool. 
## for ($i=1; $i -le 2; $i++) {-## Command to create network interface for VMs ## -$nic = @{ + ## Command to create network interface for VMs ## + $nic = @{ Name = "myNicVM$i" ResourceGroupName = 'CreateIntLBQS-rg' Location = 'eastus' Subnet = $vnet.Subnets[0] NetworkSecurityGroup = $nsg LoadBalancerBackendAddressPool = $bepool-} -$nicVM = New-AzNetworkInterface @nic --## Create a virtual machine configuration for VMs ## -$vmsz = @{ - VMName = "myVM$i" - VMSize = 'Standard_DS1_v2' -} -$vmos = @{ - ComputerName = "myVM$i" - Credential = $cred -} -$vmimage = @{ - PublisherName = 'MicrosoftWindowsServer' - Offer = 'WindowsServer' - Skus = '2019-Datacenter' - Version = 'latest' -} -$vmConfig = New-AzVMConfig @vmsz ` - | Set-AzVMOperatingSystem @vmos -Windows ` - | Set-AzVMSourceImage @vmimage ` - | Add-AzVMNetworkInterface -Id $nicVM.Id --## Create the virtual machine for VMs ## -$vm = @{ - ResourceGroupName = 'CreateIntLBQS-rg' - Location = 'eastus' - VM = $vmConfig - Zone = "$i" -} -New-AzVM @vm -AsJob -} + } + $nicVM = New-AzNetworkInterface @nic ++ ## Create a virtual machine configuration for VMs ## + $vmsz = @{ + VMName = "myVM$i" + VMSize = 'Standard_DS1_v2' + } + $vmos = @{ + ComputerName = "myVM$i" + Credential = $cred + } + $vmimage = @{ + PublisherName = 'MicrosoftWindowsServer' + Offer = 'WindowsServer' + Skus = '2019-Datacenter' + Version = 'latest' + } + $vmConfig = New-AzVMConfig @vmsz ` + | Set-AzVMOperatingSystem @vmos -Windows ` + | Set-AzVMSourceImage @vmimage ` + | Add-AzVMNetworkInterface -Id $nicVM.Id ++ ## Create the virtual machine for VMs ## + $vm = @{ + ResourceGroupName = 'CreateIntLBQS-rg' + Location = 'eastus' + VM = $vmConfig + Zone = "$i" + } + New-AzVM @vm -AsJob +} ``` The deployments of the virtual machines and bastion host are submitted as PowerShell jobs. To view the status of the jobs, use [Get-Job](/powershell/module/microsoft.powershell.core/get-job): The extension runs `PowerShell Add-WindowsFeature Web-Server` to install the IIS ## For loop with variable to install custom script extension on virtual machines. ## for ($i=1; $i -le 2; $i++) {-$ext = @{ - Publisher = 'Microsoft.Compute' - ExtensionType = 'CustomScriptExtension' - ExtensionName = 'IIS' - ResourceGroupName = 'CreateIntLBQS-rg' - VMName = "myVM$i" - Location = 'eastus' - TypeHandlerVersion = '1.8' - SettingString = '{"commandToExecute":"powershell Add-WindowsFeature Web-Server; powershell Add-Content -Path \"C:\\inetpub\\wwwroot\\Default.htm\" -Value $($env:computername)"}' -} -Set-AzVMExtension @ext -AsJob + $ext = @{ + Publisher = 'Microsoft.Compute' + ExtensionType = 'CustomScriptExtension' + ExtensionName = 'IIS' + ResourceGroupName = 'CreateIntLBQS-rg' + VMName = "myVM$i" + Location = 'eastus' + TypeHandlerVersion = '1.8' + SettingString = '{"commandToExecute":"powershell Add-WindowsFeature Web-Server; powershell Add-Content -Path \"C:\\inetpub\\wwwroot\\Default.htm\" -Value $($env:computername)"}' + } + Set-AzVMExtension @ext -AsJob } ``` New-AzVM @vm 7. Open **Internet Explorer** on **myTestVM**. -8. Enter the IP address from the previous step into the address bar of the browser. The default page of IIS Web server is displayed on the browser. +8. Enter the IP address from the previous step into the address bar of the browser. The custom IIS Web server page is displayed. 
- :::image type="content" source="./media/quickstart-load-balancer-standard-internal-portal/load-balancer-test.png" alt-text="Create a standard internal load balancer" border="true"::: + :::image type="content" source="./media/quickstart-load-balancer-standard-internal-portal/load-balancer-test.png" alt-text="Screenshot of web browser showing default web page for load balanced VM" border="true"::: -To see the load balancer distribute traffic across all three VMs, you can customize the default page of each VM's IIS Web server and then force-refresh your web browser from the client machine. +To see the load balancer distribute traffic across the backend VMs, you can force-refresh your web browser from the test machine. ## Clean up resources |
logic-apps | Azure Arc Enabled Logic Apps Create Deploy Workflows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/azure-arc-enabled-logic-apps-create-deploy-workflows.md | The following example describes a sample App Service plan resource definition th "type": "CustomLocation" }, "sku": {- "tier": "K1", - "name": "Kubernetes", + "tier": "Kubernetes", + "name": "K1", "capacity": 1 }, "properties": { |
logic-apps | Logic Apps Using Sap Connector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-using-sap-connector.md | Along with simple string and number inputs, the SAP connector accepts the follow ### SAP built-in connector -The preview SAP built-in connector trigger named **Register SAP RFC server for trigger** is available in the Azure portal, but the trigger currently can't receive calls from SAP when deployed in Azure. To fire the trigger, you can run the workflow locally in Visual Studio Code. For Visual Studio Code setup requirements and more information, see [Create a Standard logic app workflow in single-tenant Azure Logic Apps using Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md). +The preview SAP built-in connector trigger named **Register SAP RFC server for trigger** is available in the Azure portal, but the trigger currently can't receive calls from SAP when deployed in Azure. To fire the trigger, you can run the workflow locally in Visual Studio Code. For Visual Studio Code setup requirements and more information, see [Create a Standard logic app workflow in single-tenant Azure Logic Apps using Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md). You must also set up the following environment variables on the computer where you install Visual Studio Code: + + - **WEBSITE_PRIVATE_IP**: Set this environment variable value to **127.0.0.1** as the localhost address. + - **WEBSITE_PRIVATE_PORTS**: Set this environment variable value to two free and usable ports on your local computer, separating the values with a comma (**,**), for example, **8080,8088**. ## Prerequisites For a Standard workflow in single-tenant Azure Logic Apps, use the preview SAP * #### SAP trigger requirements -The preview SAP built-in connector trigger named **Register SAP RFC server for trigger** is available in the Azure portal, but the trigger currently can't receive calls from SAP when deployed in Azure. To fire the trigger, you can run the workflow locally in Visual Studio Code. For Visual Studio Code setup requirements and more information, see [Create a Standard logic app workflow in single-tenant Azure Logic Apps using Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md). +The preview SAP built-in connector trigger named **Register SAP RFC server for trigger** is available in the Azure portal, but the trigger currently can't receive calls from SAP when deployed in Azure. To fire the trigger, you can run the workflow locally in Visual Studio Code. For Visual Studio Code setup requirements and more information, see [Create a Standard logic app workflow in single-tenant Azure Logic Apps using Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md). You must also set up the following environment variables on the computer where you install Visual Studio Code: + + - **WEBSITE_PRIVATE_IP**: Set this environment variable value to **127.0.0.1** as the localhost address. + - **WEBSITE_PRIVATE_PORTS**: Set this environment variable value to two free and usable ports on your local computer, separating the values with a comma (**,**), for example, **8080,8088**. ### [ISE](#tab/ise) The following screenshot shows the example query's traces results table: ## Next steps -* [Create example workflows for common SAP scenarios](sap-create-example-scenario-workflows.md) +* [Create example workflows for common SAP scenarios](sap-create-example-scenario-workflows.md) |
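Before starting a local run, you can sanity-check both variables with a short script. A sketch using only the Python standard library — the variable names are the required ones above; the checks themselves are illustrative:

```python
import os
import socket

ip = os.environ.get("WEBSITE_PRIVATE_IP", "")
ports = os.environ.get("WEBSITE_PRIVATE_PORTS", "")

assert ip == "127.0.0.1", "WEBSITE_PRIVATE_IP must be set to 127.0.0.1"
assert ports.count(",") == 1, "WEBSITE_PRIVATE_PORTS must list two ports"

for port in ports.split(","):
    # Binding briefly confirms the port is free and usable on this machine.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind((ip, int(port.strip())))
    print(f"port {port.strip()} is free")
```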
logic-apps | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Logic Apps description: Lists Azure Policy Regulatory Compliance controls available for Azure Logic Apps. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Last updated 02/14/2023 -- |
machine-learning | Component Reference V2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference-v2/component-reference-v2.md | You can navigate to Custom components in Azure Machine Learning Studio as shown Each component represents a set of code that can run independently and perform a machine learning task, given the required inputs. A component might contain a particular algorithm, or perform a task that is important in machine learning, such as missing value replacement, or statistical analysis. For help with choosing algorithms, see -* [How to select algorithms](..//how-to-select-algorithms.md) +* [How to select algorithms](../v1/how-to-select-algorithms.md) > [!TIP] > In any pipeline in the designer, you can get information about a specific component. Select the **Learn more** link in the component card when hovering on the component in the component list, or in the right pane of the component. For help with choosing algorithms, see ## Next steps -* [Tutorial: Build a model in designer to predict auto prices](../tutorial-designer-automobile-price-train-score.md) +* [Tutorial: Build a model in designer to predict auto prices](../v1/tutorial-designer-automobile-price-train-score.md) |
machine-learning | Component Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/component-reference.md | This reference content provides the technical background on each of the classic Each component represents a set of code that can run independently and perform a machine learning task, given the required inputs. A component might contain a particular algorithm, or perform a task that is important in machine learning, such as missing value replacement, or statistical analysis. For help with choosing algorithms, see -* [How to select algorithms](../how-to-select-algorithms.md) +* [How to select algorithms](../v1/how-to-select-algorithms.md) * [Azure Machine Learning Algorithm Cheat Sheet](../v1/algorithm-cheat-sheet.md) > [!TIP] If you directly deploy real-time endpoint from a previous completed real-time in ## Next steps -* [Tutorial: Build a model in designer to predict auto prices](../tutorial-designer-automobile-price-train-score.md) +* [Tutorial: Build a model in designer to predict auto prices](../v1/tutorial-designer-automobile-price-train-score.md) |
machine-learning | Graph Search Syntax | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/graph-search-syntax.md | Graph search uses Lucene simple query as full-text search syntax on node "name" Filter queries use the following pattern: -`**[key1] [operator1] [value1]; [key2] [operator1] [value2];**` +`[key1] [operator1] [value1]; [key2] [operator2] [value2];` You can use the following node properties as keys: And use the following operators: - If `>=, >, <, or <=` is chosen, values will automatically be converted to number type. Otherwise, string types are used for comparison. - For all string-type values, comparisons are case-insensitive. - The "In" operator expects a collection as its value; the collection syntax is `{name1, name2, name3}`-- Space will be ignored between keywords+- Spaces between keywords are ignored |
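A small helper that composes clauses in the `[key] [operator] [value];` pattern above can keep hand-written queries consistent. A sketch — the key names in the example call are hypothetical, so substitute real node properties:

```python
def build_filter(*clauses):
    """Join (key, operator, value) triples into the documented syntax."""
    parts = []
    for key, op, value in clauses:
        if op.lower() == "in":                     # collections use {a, b, c}
            value = "{" + ", ".join(value) + "}"
        parts.append(f"{key} {op} {value}")
    return "; ".join(parts) + ";"

# Hypothetical keys, for illustration only.
print(build_filter(("runStatus", "in", ["Completed", "Failed"]),
                   ("duration", ">=", 100)))
# -> "runStatus in {Completed, Failed}; duration >= 100;"
```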
machine-learning | Score Image Model | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/score-image-model.md | After you have generated a set of scores using [Score Image Model](score-image-m ### Publish scores as a web service -A common use of scoring is to return the output as part of a predictive web service. For more information, see [this tutorial](../tutorial-designer-automobile-price-deploy.md) on how to deploy a real-time endpoint based on a pipeline in Azure Machine Learning designer. +A common use of scoring is to return the output as part of a predictive web service. For more information, see [this tutorial](../v1/tutorial-designer-automobile-price-deploy.md) on how to deploy a real-time endpoint based on a pipeline in Azure Machine Learning designer. ## Next steps |
machine-learning | Score Model | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/score-model.md | The score, or predicted value, can be in many different formats, depending on th ## Publish scores as a web service -A common use of scoring is to return the output as part of a predictive web service. For more information, see [this tutorial](../tutorial-designer-automobile-price-deploy.md) on how to deploy a real-time endpoint based on a pipeline in Azure Machine Learning designer. +A common use of scoring is to return the output as part of a predictive web service. For more information, see [this tutorial](../v1/tutorial-designer-automobile-price-deploy.md) on how to deploy a real-time endpoint based on a pipeline in Azure Machine Learning designer. ## Next steps |
machine-learning | Web Service Input Output | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/web-service-input-output.md | The Web Service Input component indicates where user data enters the pipeline. T ## How to use Web Service Input and Output -When you [create a real-time inference pipeline](../tutorial-designer-automobile-price-deploy.md#create-a-real-time-inference-pipeline) from your training pipeline, the Web Service Input and Web Service Output components will be automatically added to show where user data enters the pipeline and where data is returned. +When you [create a real-time inference pipeline](../v1/tutorial-designer-automobile-price-deploy.md#create-a-real-time-inference-pipeline) from your training pipeline, the Web Service Input and Web Service Output components will be automatically added to show where user data enters the pipeline and where data is returned. > [!NOTE] > Automatically generating a real-time inference pipeline is a rule-based, best-effort process. There's no guarantee of correctness. The following example shows how to manually create real-time inference pipeline  -After you submit the pipeline and the run finishes successfully, you can [deploy the real-time endpoint](../tutorial-designer-automobile-price-deploy.md#deploy-the-real-time-endpoint). +After you submit the pipeline and the run finishes successfully, you can [deploy the real-time endpoint](../v1/tutorial-designer-automobile-price-deploy.md#deploy-the-real-time-endpoint). > [!NOTE] > In the preceding example, **Enter Data Manually** provides the data schema for web service input and is necessary for deploying the real-time endpoint. Generally, you should always connect a component or dataset to the port where **Web Service Input** is connected to provide the data schema. ## Next steps-Learn more about [deploying the real-time endpoint](../tutorial-designer-automobile-price-deploy.md#deploy-the-real-time-endpoint). +Learn more about [deploying the real-time endpoint](../v1/tutorial-designer-automobile-price-deploy.md#deploy-the-real-time-endpoint). See the [set of components available](component-reference.md) to Azure Machine Learning. |
machine-learning | Concept Train Machine Learning Model | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-train-machine-learning-model.md | The Azure training lifecycle consists of: The designer lets you train models using a drag and drop interface in your web browser. + [What is the designer?](concept-designer.md)-+ [Tutorial: Predict automobile price](tutorial-designer-automobile-price-train-score.md) ## Azure CLI |
machine-learning | Concept Workspace | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-workspace.md | To get started with Azure Machine Learning, see: + [Recover a workspace after deletion (soft-delete)](concept-soft-delete.md) + [Get started with Azure Machine Learning](quickstart-create-resources.md) + [Tutorial: Create your first classification model with automated machine learning](tutorial-first-experiment-automated-ml.md) -+ [Tutorial: Predict automobile price with the designer](tutorial-designer-automobile-price-train-score.md) |
machine-learning | How To Access Resources From Endpoints Managed Identities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-resources-from-endpoints-managed-identities.md | Delete the User-assigned managed identity: * [Deploy and score a machine learning model by using an online endpoint](how-to-deploy-online-endpoints.md). * For more on deployment, see [Safe rollout for online endpoints](how-to-safely-rollout-online-endpoints.md).-* For more information on using the CLI, see [Use the CLI extension for Azure Machine Learning](reference-azure-machine-learning-cli.md). +* For more information on using the CLI, see [Use the CLI extension for Azure Machine Learning](how-to-configure-cli.md). * To see which compute resources you can use, see [Managed online endpoints SKU list](reference-managed-online-endpoints-vm-sku-list.md). * For more on costs, see [View costs for an Azure Machine Learning managed online endpoint](how-to-view-online-endpoints-costs.md). * For information on monitoring endpoints, see [Monitor managed online endpoints](how-to-monitor-online-endpoints.md). |
machine-learning | How To Configure Network Isolation With V2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-network-isolation-with-v2.md | In this article, you'll learn about network isolation changes with our new v2 AP ## Prerequisites -* The [Azure Machine Learning Python SDK v1](/python/api/overview/azure/ml/install) or [Azure CLI extension for machine learning v1](reference-azure-machine-learning-cli.md). +* The [Azure Machine Learning Python SDK v1](/python/api/overview/azure/ml/install) or [Azure CLI extension for machine learning v1](./v1/reference-azure-machine-learning-cli.md). > [!IMPORTANT] > The v1 extension (`azure-cli-ml`) version must be 1.41.0 or greater. Use the `az version` command to view version information. The Azure Machine Learning CLI v2 uses our new v2 API platform. New features suc As mentioned in the previous section, there are two types of operations; with ARM and with the workspace. With the __legacy v1 API__, most operations used the workspace. With the v1 API, adding a private endpoint to the workspace provided network isolation for everything except CRUD operations on the workspace or compute resources. -With the __new v2 API__, most operations use ARM. So enabling a private endpoint on your workspace doesn't provide the same level of network isolation. Operations that use ARM communicate over public networks, and include any metadata (such as your resource IDs) or parameters used by the operation. For example, the [create or update job](/rest/api/azureml/2022-10-01/jobs/create-or-update) api sends metadata, and [parameters](./reference-yaml-job-command.md). +With the __new v2 API__, most operations use ARM. So enabling a private endpoint on your workspace doesn't provide the same level of network isolation. Operations that use ARM communicate over public networks, and include any metadata (such as your resource IDs) or parameters used by the operation. For example, the [create or update job](/rest/api/azureml/2023-04-01/jobs/create-or-update) api sends metadata, and [parameters](./reference-yaml-job-command.md). > [!IMPORTANT] > For most people, using the public ARM communications is OK: ws.update(v1_legacy_mode=False) # [Azure CLI extension v1](#tab/azurecliextensionv1) -The Azure CLI [extension v1 for machine learning](reference-azure-machine-learning-cli.md) provides the [az ml workspace update](/cli/azure/ml(v1)/workspace#az-ml(v1)-workspace-update) command. To disable the parameter for a workspace, add the parameter `--v1-legacy-mode False`. +The Azure CLI [extension v1 for machine learning](./v1/reference-azure-machine-learning-cli.md) provides the [az ml workspace update](/cli/azure/ml(v1)/workspace#az-ml(v1)-workspace-update) command. To disable the parameter for a workspace, add the parameter `--v1-legacy-mode False`. > [!IMPORTANT] > The `v1-legacy-mode` parameter is only available in version 1.41.0 or newer of the Azure CLI extension for machine learning v1 (`azure-cli-ml`). Use the `az version` command to view version information. |
machine-learning | How To Connection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-connection.md | This YAML script creates an Azure SQL DB connection. Be sure to update the appro # my_sqldb_connection.yaml $schema: http://azureml/sdk-2-0/Connection.json -type: azuresqldb +type: azure_sql_db name: my_sqldb_connection target: Server=tcp:<myservername>,<port>;Database=<mydatabase>;Trusted_Connection=False;Encrypt=True;Connection Timeout=30 |
machine-learning | How To Create Attach Compute Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-attach-compute-cluster.md | In this article, learn how to: * An Azure Machine Learning workspace. For more information, see [Create an Azure Machine Learning workspace](how-to-manage-workspace.md). -* The [Azure CLI extension for Machine Learning service (v2)](reference-azure-machine-learning-cli.md), [Azure Machine Learning Python SDK](/python/api/overview/azure/ai-ml-readme), or the [Azure Machine Learning Visual Studio Code extension](how-to-setup-vs-code.md). +* The [Azure CLI extension for Machine Learning service (v2)](how-to-configure-cli.md), [Azure Machine Learning Python SDK](/python/api/overview/azure/ai-ml-readme), or the [Azure Machine Learning Visual Studio Code extension](how-to-setup-vs-code.md). * If using the Python SDK, [set up your development environment with a workspace](how-to-configure-environment.md). Once your environment is set up, attach to the workspace in your Python script: |
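As a rough sketch of what the SDK v2 path looks like once you're attached to the workspace — the subscription, resource group, workspace, and cluster names below are placeholders:

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import AmlCompute
from azure.identity import DefaultAzureCredential

# Placeholder identifiers -- replace with your own values.
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<WORKSPACE_NAME>",
)

cluster = AmlCompute(
    name="cpu-cluster",
    size="STANDARD_DS3_v2",
    min_instances=0,                  # scale to zero when idle
    max_instances=4,
    idle_time_before_scale_down=120,  # seconds
)
ml_client.compute.begin_create_or_update(cluster).result()
```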
machine-learning | How To Create Component Pipeline Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-component-pipeline-python.md | If you don't have an Azure subscription, create a free account before you begin. This article uses the Python SDK for Azure Machine Learning to create and control an Azure Machine Learning pipeline. The article assumes that you'll be running the code snippets interactively in either a Python REPL environment or a Jupyter notebook. -This article is based on the [image_classification_keras_minist_convnet.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb) notebook found in the `sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet` directory of the [Azure Machine Learning Examples](https://github.com/azure/azureml-examples) repository. +This article is based on the [image_classification_keras_minist_convnet.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb) notebook found in the `sdk/python/jobs/pipelines/2e_image_classification_keras_minist_convnet` directory of the [Azure Machine Learning Examples](https://github.com/azure/azureml-examples) repository. ## Import required libraries The image classification task can be split into three steps: prepare data, train [Azure Machine Learning component](concept-component.md) is a self-contained piece of code that does one step in a machine learning pipeline. In this article, you'll create three components for the image classification task: - Prepare data for training and test-- Train a neural networking for image classification using training data+- Train a neural network for image classification using training data - Score the model using test data -For each component, you need to prepare the following staff: +For each component, you need to prepare the following: 1. Prepare the Python script containing the execution logic If you're following along with the example in the [Azure Machine Learning Exampl #### Define component using Python function -By using command_component() function as a decorator, you can easily define the component's interface, metadata and code to execute from a Python function. Each decorated Python function will be transformed into a single static specification (YAML) that the pipeline service can process. +By using `command_component()` function as a decorator, you can easily define the component's interface, metadata and code to execute from a Python function. Each decorated Python function will be transformed into a single static specification (YAML) that the pipeline service can process. :::code language="python" source="~/azureml-examples-main/sdk/python/jobs/pipelines/2e_image_classification_keras_minist_convnet/prep/prep_component.py"::: The `train.py` file contains a normal Python function, which performs the traini #### Define component using Python function -After defining the training function successfully, you can use @command_component in Azure Machine Learning SDK v2 to wrap your function as a component, which can be used in Azure Machine Learning pipelines. +After defining the training function successfully, you can use `@command_component` in Azure Machine Learning SDK v2 to wrap your function as a component, which can be used in Azure Machine Learning pipelines. 
:::code language="python" source="~/azureml-examples-main/sdk/python/jobs/pipelines/2e_image_classification_keras_minist_convnet/train/train_component.py"::: |
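For orientation, a minimal sketch of what a decorated component looks like — the component name, I/O names, and body here are placeholders rather than the sample's actual code:

```python
from mldesigner import command_component, Input, Output

@command_component(
    name="prep_data",
    version="1",
    display_name="Prep Data",
    description="Split raw data into train and test folders.",
)
def prepare_data_component(
    input_data: Input(type="uri_folder"),
    training_data: Output(type="uri_folder"),
    test_data: Output(type="uri_folder"),
):
    # Real logic would read from input_data and write the two splits;
    # the body is elided to illustrate only the decorator and I/O typing.
    ...
```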
machine-learning | How To Manage Rest | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-rest.md | providers/Microsoft.MachineLearningServices/workspaces/<YOUR-WORKSPACE-NAME>/com -H "Authorization:Bearer <YOUR-ACCESS-TOKEN>" ``` -To create or overwrite a named compute resource, you'll use a PUT request. In the following, in addition to the now-familiar replacements of `YOUR-SUBSCRIPTION-ID`, `YOUR-RESOURCE-GROUP`, `YOUR-WORKSPACE-NAME`, and `YOUR-ACCESS-TOKEN`, replace `YOUR-COMPUTE-NAME`, and values for `location`, `vmSize`, `vmPriority`, `scaleSettings`, `adminUserName`, and `adminUserPassword`. As specified in the reference at [Machine Learning Compute - Create Or Update SDK Reference](/rest/api/azureml/2022-10-01/workspaces/create-or-update), the following command creates a dedicated, single-node Standard_D1 (a basic CPU compute resource) that will scale down after 30 minutes: +To create or overwrite a named compute resource, you'll use a PUT request. In the following, in addition to the now-familiar replacements of `YOUR-SUBSCRIPTION-ID`, `YOUR-RESOURCE-GROUP`, `YOUR-WORKSPACE-NAME`, and `YOUR-ACCESS-TOKEN`, replace `YOUR-COMPUTE-NAME`, and values for `location`, `vmSize`, `vmPriority`, `scaleSettings`, `adminUserName`, and `adminUserPassword`. As specified in the reference at [Machine Learning Compute - Create Or Update SDK Reference](/rest/api/azureml/2023-04-01/workspaces/create-or-update), the following command creates a dedicated, single-node Standard_D1 (a basic CPU compute resource) that will scale down after 30 minutes: ```bash curl -X PUT \ The Azure Machine Learning workspace uses Azure Container Registry (ACR) for som ## Next steps - Explore the complete [Azure Machine Learning REST API reference](/rest/api/azureml/).-- Learn how to use the designer to [Predict automobile price with the designer](./tutorial-designer-automobile-price-train-score.md).-- Explore [Azure Machine Learning with Jupyter notebooks](..//machine-learning/samples-notebooks.md).+- Explore [Azure Machine Learning with Jupyter notebooks](../machine-learning/samples-notebooks.md). |
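The same GET is easy to script. A sketch using the `requests` package, where the token and identifiers are the same placeholders as above, and the api-version value is an assumption to confirm against the REST reference:

```python
import requests

SUBSCRIPTION_ID = "<YOUR-SUBSCRIPTION-ID>"
RESOURCE_GROUP = "<YOUR-RESOURCE-GROUP>"
WORKSPACE = "<YOUR-WORKSPACE-NAME>"
TOKEN = "<YOUR-ACCESS-TOKEN>"
API_VERSION = "2023-04-01"  # placeholder; confirm against the REST reference

url = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
    f"/resourceGroups/{RESOURCE_GROUP}"
    f"/providers/Microsoft.MachineLearningServices/workspaces/{WORKSPACE}"
    f"/computes?api-version={API_VERSION}"
)
resp = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()
for compute in resp.json().get("value", []):
    print(compute["name"])
```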
machine-learning | How To Secure Workspace Vnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-workspace-vnet.md | When your Azure Machine Learning workspace is configured with a private endpoint When ACR is behind a virtual network, Azure Machine Learning can't use it to directly build Docker images. Instead, the compute cluster is used to build the images. > [!IMPORTANT]-> The compute cluster used to build Docker images needs to be able to access the package repositories that are used to train and deploy your models. You may need to add network security rules that allow access to public repos, [use private Python packages](how-to-use-private-python-packages.md), or use [custom Docker images (SDK v1)](v1/how-to-train-with-custom-image.md?view=azureml-api-1&preserve-view=true) that already include the packages. +> The compute cluster used to build Docker images needs to be able to access the package repositories that are used to train and deploy your models. You may need to add network security rules that allow access to public repos, [use private Python packages](concept-vulnerability-management.md#using-a-private-package-repository), or use [custom Docker images (SDK v1)](v1/how-to-train-with-custom-image.md?view=azureml-api-1&preserve-view=true) that already include the packages. > [!WARNING] > If your Azure Container Registry uses a private endpoint or service endpoint to communicate with the virtual network, you cannot use a managed identity with an Azure Machine Learning compute cluster. |
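One way to apply this is to designate an image-build compute for the workspace. A hedged sketch with the v2 Python SDK — workspace and cluster names are placeholders, and it assumes a cluster that can reach your package repositories already exists:

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import Workspace
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<WORKSPACE_NAME>",
)

# Route environment image builds to an existing compute cluster that can
# reach your package repositories from inside the virtual network.
ws = Workspace(name="<WORKSPACE_NAME>", image_build_compute="cpu-cluster")
ml_client.workspaces.begin_update(ws).result()
```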
machine-learning | Overview What Is Azure Machine Learning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/overview-what-is-azure-machine-learning.md | The [Azure Machine Learning studio](https://ml.azure.com) offers multiple author :::image type="content" source="media/overview-what-is-azure-machine-learning/metrics.png" alt-text="Screenshot of metrics for a training run."::: -* Azure Machine Learning designer: use the designer to train and deploy machine learning models without writing any code. Drag and drop datasets and components to create ML pipelines. Try out the [designer tutorial](tutorial-designer-automobile-price-train-score.md). +* Azure Machine Learning designer: use the designer to train and deploy machine learning models without writing any code. Drag and drop datasets and components to create ML pipelines. * Automated machine learning UI: Learn how to create [automated ML experiments](tutorial-first-experiment-automated-ml.md) with an easy-to-use interface. |
machine-learning | Reference Checkpoint Performance For Large Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-checkpoint-performance-for-large-models.md | Learn how to boost checkpoint speed and reduce checkpoint cost for large Azure M ## Overview -Azure Container for PyTorch (ACPT) now includes **Nebula**, a fast, simple, disk-less, model-aware checkpoint tool. Nebula offers a simple, high-speed checkpointing solution for distributed large-scale model training jobs using PyTorch. By utilizing the latest distributed computing technologies, Nebula can reduce checkpoint times from hours to seconds - potentially saving 95% to 99.9% of time. Large-scale training jobs can greatly benefit from NebulaΓÇÖs performance. +Azure Container for PyTorch (ACPT) now includes **Nebula**, a fast, simple, disk-less, model-aware checkpoint tool. Nebula offers a simple, high-speed checkpointing solution for distributed large-scale model training jobs using PyTorch. By utilizing the latest distributed computing technologies, Nebula can reduce checkpoint times from hours to seconds - potentially saving 95% to 99.9% of time. Large-scale training jobs can greatly benefit from Nebula's performance. To make Nebula available for your training jobs, import the `nebulaml` python package in your script. Nebula has full compatibility with different distributed PyTorch training strategies, including PyTorch Lightning, DeepSpeed, and more. The Nebula API offers a simple way to monitor and view checkpoint lifecycles. The APIs support various model types, and ensure checkpoint consistency and reliability. With Nebula you can: * An Azure Machine Learning compute target. See [Manage training & deploy computes](./how-to-create-attach-compute-studio.md) to learn more about compute target creation * A training script that uses **PyTorch**. * ACPT-curated (Azure Container for PyTorch) environment. See [Curated environments](resource-curated-environments.md#azure-container-for-pytorch-acpt) to obtain the ACPT image. Learn how to [use the curated environment](./how-to-use-environments.md)-* An Azure Machine Learning script run configuration file. If you donΓÇÖt have one, you can follow [this resource](./how-to-set-up-training-targets.md) +* An Azure Machine Learning script run configuration file. If you don't have one, you can follow [this resource](./v1/how-to-set-up-training-targets.md) ## How to Use Nebula |
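As a preview of that section, initializing Nebula in a training script follows this pattern. This is a minimal sketch assuming the `nebulaml` API names from the ACPT documentation; the storage path, interval, and model are placeholders:

```python
import torch
import nebulaml as nm

model = torch.nn.Linear(4, 2)  # stand-in for your real model

# Initialize Nebula once, before the training loop; checkpoints persist asynchronously.
nm.init(persistent_storage_path="/tmp/checkpoints", persistent_time_interval=2)

# Inside the training loop, save a model-aware checkpoint by name.
ckpt = nm.Checkpoint()
ckpt.save("epoch-10-ckpt", model)
```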
machine-learning | Troubleshooting Managed Feature Store | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/troubleshooting-managed-feature-store.md | When you create or update a feature store, you may encounter the following issue - [ARM Throttling Error](#arm-throttling-error)
- [RBAC Permission Errors](#rbac-permission-errors)
- [Duplicated Materialization Identity ARM ID Issue](#duplicated-materialization-identity-arm-id-issue)+- [Older versions of `azure-mgmt-authorization` package don't work with `AzureMLOnBehalfOfCredential`](#older-versions-of-azure-mgmt-authorization-package-dont-work-with-azuremlonbehalfofcredential)

### ARM Throttling Error

If the user doesn't have the required roles, the deployment fails. The error res Grant the `Contributor` and `User Access Administrator` roles to the user on the resource group where the feature store is to be created and instruct the user to run the deployment again.

-For more details, see [Permissions required for the `feature store materialization managed identity` role](how-to-setup-access-control-feature-store.md#permissions-required-for-the-feature-store-materialization-managed-identity-role).
-+For more information, see [Permissions required for the `feature store materialization managed identity` role](how-to-setup-access-control-feature-store.md#permissions-required-for-the-feature-store-materialization-managed-identity-role).

### Duplicated materialization identity ARM ID issue

Once the feature store is updated to enable materialization for the first time,

#### Symptom

-When updating the feature store using SDK/CLI, it fails with the following error message
+When the feature store is updated using the SDK/CLI, the update fails with the following error message:

Error:

When the user-assigned managed identity is used by the feature store as its mate - (B): /subscriptions/{sub-id}/__resourceGroups__/{rg}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/{your-uai}

-The next time the user updates the feature store, if they use the same user-assigned managed identity as the materialization identity in the update request, while using the ARM ID in format (A), the update will fail with the error above.
+When you update the feature store using the same user-assigned managed identity as the materialization identity in the update request, while using the ARM ID in format (A), the update will fail with the error above.

To fix the issue, replace string `resourcegroups` with `resourceGroups` in the user-assigned managed identity ARM ID, and run feature store update again.

+### Older versions of `azure-mgmt-authorization` package don't work with `AzureMLOnBehalfOfCredential`
++#### Symptom
+When you use the `setup_storage_uai` script provided in the *featurestore_sample* folder in the azureml-examples repository, the script fails with the error message:
++`AttributeError: 'AzureMLOnBehalfOfCredential' object has no attribute 'signed_session'`
++#### Solution:
+Check the version of the `azure-mgmt-authorization` package that is installed and make sure you're using a recent version, such as 3.0.0 or later. Older versions, such as 0.61.0, don't work with `AzureMLOnBehalfOfCredential`.
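A quick way to check and remediate the installed version; the version floor simply reflects the guidance above:

```bash
# Show the currently installed version, then upgrade if it's older than 3.0.0.
pip show azure-mgmt-authorization
pip install --upgrade "azure-mgmt-authorization>=3.0.0"
```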
++ ## Feature Set Spec Create Errors

- [Invalid schema in feature set spec](#invalid-schema-in-feature-set-spec)-- [Cannot find transformation class](#cannot-find-transformation-class)+- [Can't find transformation class](#cant-find-transformation-class)
- [FileNotFoundError on code folder](#filenotfounderror-on-code-folder)

### Invalid schema in feature set spec

-Before registering a feature set into the feature store, users first define the feature set spec locally and run `<feature_set_spec>.to_spark_dataframe()` to validate it.
+Before you register a feature set into the feature store, define the feature set spec locally and run `<feature_set_spec>.to_spark_dataframe()` to validate it.

#### Symptom-When user runs `<feature_set_spec>.to_spark_dataframe()` , various schema validation failures may occur if the schema of the feature set dataframe is not aligned with the definition in the feature set spec.
+When a user runs `<feature_set_spec>.to_spark_dataframe()`, various schema validation failures may occur if the schema of the feature set dataframe isn't aligned with the definition in the feature set spec.

For example:
- Error message: `azure.ai.ml.exceptions.ValidationException: Schema check errors, timestamp column: timestamp is not in output dataframe`

Check the schema validation failure error, and update the feature set spec defin - update the `source.timestamp_column.name` property to define the timestamp column name correctly.
- update the `index_columns` property to define the index columns correctly.
- update the `features` property to define the feature column names and types correctly.
+- If the feature source data is of type "csv", make sure the CSV files are generated with column headers.

Then run `<feature_set_spec>.to_spark_dataframe()` again to check if the validation is passed.

If the feature set spec is defined using SDK, it's also recommended to use the ` Check the [Feature Set Spec schema](reference-yaml-featureset-spec.md) doc for more details.
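For instance, a local validation pass can look like the following sketch. It assumes the `create_feature_set_spec` helper from the `azureml-featurestore` package can load a spec folder via `spec_path`; if your SDK version differs, load the spec the same way you created it:

```python
from azureml.featurestore import create_feature_set_spec

# Load the locally defined spec, then materialize a Spark dataframe to surface
# schema mismatches (timestamp column, index columns, feature names/types) early.
spec = create_feature_set_spec(spec_path="./transactions_featureset_spec")  # placeholder path
df = spec.to_spark_dataframe()
df.printSchema()
```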
-### Cannot find transformation class
+### Can't find transformation class

#### Symptom

When a user runs `<feature_set_spec>.to_spark_dataframe()`, it returns the following error `AttributeError: module '<...>' has no attribute '<...>'`

And in this example, the `feature_transformation_code.path` property in the YAML - [Feature Retrieval Specification Resolving Errors](#feature-retrieval-specification-resolving-errors)
- [File *feature_retrieval_spec.yaml* not found when using a model as input to the feature retrieval job](#file-feature_retrieval_specyaml-not-found-when-using-a-model-as-input-to-the-feature-retrieval-job)-- [[Observation Data is not Joined with any feature values](#observation-data-isnt-joined-with-any-feature-values)]+- [Observation Data isn't joined with any feature values](#observation-data-isnt-joined-with-any-feature-values)
- [User or Managed Identity not having proper RBAC permission on the feature store](#user-or-managed-identity-not-having-proper-rbac-permission-on-the-feature-store)
- [User or Managed Identity not having proper RBAC permission to Read from the Source Storage or Offline store](#user-or-managed-identity-not-having-proper-rbac-permission-to-read-from-the-source-storage-or-offline-store)
- [Training job fails to read data generated by the build-in Feature Retrieval Component](#training-job-fails-to-read-data-generated-by-the-build-in-feature-retrieval-component)+- [`generate_feature_retrieval_spec()` fails due to use of local feature set specification](#generate_feature_retrieval_spec-fails-due-to-use-of-local-feature-set-specification)
+- [`get_offline_features()` query takes a long time](#get_offline_features-query-takes-a-long-time)

When a feature retrieval job fails, check the error details by going to the **run detail page**, select the **Outputs + logs** tab, and check the file *logs/azureml/driver/stdout*.

When you provide a model as input to the feature retrieval step, it expects that To fix the issue, package the `feature_retrieval_spec.yaml` in the root folder of the model artifact folder, before registering the model.

-#### Observation Data isn't joined with any feature values
+### Observation Data isn't joined with any feature values

#### Symptom

Training job fails with the error message that either FileNotFoundError: [Errno 2] No such file or directory
```
-- format is not correct.+- format isn't correct.

```json
ParserError:

And the output data is always in parquet format. Update the training script to read from the "data" sub folder, and read the data as parquet.

+### `generate_feature_retrieval_spec()` fails due to use of local feature set specification
++#### Symptom:
+If you run the following Python code to generate a feature retrieval spec for a given list of features:
++```python
+featurestore.generate_feature_retrieval_spec(feature_retrieval_spec_folder, features)
+```
+You receive the error:
++`AttributeError: 'FeatureSetSpec' object has no attribute 'id'`
++#### Solution:
++A feature retrieval spec can only be generated using feature sets registered in Feature Store. If the features list contains features defined by a local feature set specification, `generate_feature_retrieval_spec()` fails with the error message above.
++To fix the issue:
++- Register the local feature set specification as a feature set in the feature store
+- Get the registered feature set
+- Create feature lists again using only features from registered feature sets
+- Generate the feature retrieval spec using the new features list
+++### `get_offline_features()` query takes a long time
++#### Symptom:
+Running `get_offline_features` to generate training data using a few features from feature store takes a long time to finish.
++#### Solutions:
++Check the following configurations:
++- For each feature set used in the query, check whether `temporal_join_lookback` is set in the feature set specification, and set it to a smaller value.
+- If the size and timestamp window on the observation dataframe are large, configure the notebook session (or the job) to increase the size (memory and core) of driver and executor, and increase the number of executors.
++ ## Feature Materialization Job Errors

- [Invalid Offline Store Configuration](#invalid-offline-store-configuration)
- [Materialization Identity not having proper RBAC permission on the feature store](#materialization-identity-not-having-proper-rbac-permission-on-the-feature-store)
- [Materialization Identity not having proper RBAC permission to Read from the Storage](#materialization-identity-not-having-proper-rbac-permission-to-read-from-the-storage)
- [Materialization identity not having proper RBAC permission to write data to the offline store](#materialization-identity-not-having-proper-rbac-permission-to-write-data-to-the-offline-store)+- [Streaming job results to notebook fails](#streaming-job-results-to-notebook-fails)
+- [Invalid Spark configuration](#invalid-spark-configuration)

When the feature materialization job fails, you can follow these steps to check the job failure details.

Assign the `Storage Blob Data Contributor` role on the offline store storage to `Storage Blob Data Contributor` is the minimum recommended access requirement. You can also assign roles with more privileges, like `Storage Blob Data Owner`.

For more information about RBAC configuration, see [Permissions required for the `feature store materialization managed identity` role](how-to-setup-access-control-feature-store.md#permissions-required-for-the-feature-store-materialization-managed-identity-role).++### Streaming job results to notebook fails
++#### Symptom:
++When using the feature store CRUD client to stream materialization job results to notebook using `fs_client.jobs.stream("<job_id>")`, the SDK call fails with the following error:
+```
+HttpResponseError: (UserError) A job was found, but it is not supported in this API version and cannot be accessed.
++Code: UserError
++Message: A job was found, but it is not supported in this API version and cannot be accessed.
+```
+#### Solution:
++When the materialization job is created (for example, by a backfill call), it may take a few seconds for the job to properly initialize. Run the `jobs.stream()` command again in a few seconds. The issue should be gone.
++### Invalid Spark configuration
++#### Symptom:
++A materialization job fails with the following error message:
++```python
+Synapse job submission failed due to invalid spark configuration request
++{
++"Message":"[..]
Either the cores or memory of the driver, executors exceeded the SparkPool Node Size.\nRequested Driver Cores:[4]\nRequested Driver Memory:[36g]\nRequested Executor Cores:[4]\nRequested Executor Memory:[36g]\nSpark Pool Node Size:[small]\nSpark Pool Node Memory:[28]\nSpark Pool Node Cores:[4]"
++}
+```
++#### Solution:
++Update the `materialization_settings.spark_configuration{}` of the feature set. Make sure the following parameters use a memory size and core count that are less than what the instance type (defined by `materialization_settings.resource`) provides:
++`spark.driver.cores`
+`spark.driver.memory`
+`spark.executor.cores`
+`spark.executor.memory`
++For example, on instance type *standard_e8s_v3*, the following Spark configuration is one of the valid options:
++ +```python
++transactions_fset_config.materialization_settings = MaterializationSettings(
++ offline_enabled=True,
++ resource = MaterializationComputeResource(instance_type="standard_e8s_v3"),
++ spark_configuration = {
++ "spark.driver.cores": 4,
++ "spark.driver.memory": "36g",
++ "spark.executor.cores": 4,
++ "spark.executor.memory": "36g",
++ "spark.executor.instances": 2
++ },
++ schedule = None,
++)
++fs_poller = fs_client.feature_sets.begin_create_or_update(transactions_fset_config)
++``` |
machine-learning | Tutorial Get Started With Feature Store | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-get-started-with-feature-store.md | Note: This tutorial uses an Azure Machine Learning Spark notebook for development.

You can also download a zip file from the [examples repository (azureml-examples)](https://github.com/azure/azureml-examples). At this page, first select the `code` dropdown, and then select `Download ZIP`. Then, unzip the contents into a folder on your local device.

1. Upload the feature store samples directory to the project workspace.- * Open Azure Machine Learning studio UI of your Azure Machine Learning workspace
+ * Open the [Azure Machine Learning studio UI](https://ml.azure.com/) of your Azure Machine Learning workspace
* Select **Notebooks** in the left nav
* Select your user name in the directory listing
* Select **upload folder** |
machine-learning | Concept Azure Machine Learning Architecture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-azure-machine-learning-architecture.md | Here are the details: [](media/concept-azure-machine-learning-architecture/inferencing.png#lightbox) -For an example of deploying a model as a web service, see [Tutorial: Train and deploy a model](../tutorial-train-deploy-notebook.md). +For an example of deploying a model as a web service, see [Tutorial: Train and deploy a model](tutorial-train-deploy-notebook.md). #### Real-time endpoints -When you deploy a trained model in the designer, you can [deploy the model as a real-time endpoint](../tutorial-designer-automobile-price-deploy.md). A real-time endpoint commonly receives a single request via the REST endpoint and returns a prediction in real-time. This is in contrast to batch processing, which processes multiple values at once and saves the results after completion to a datastore. +When you deploy a trained model in the designer, you can [deploy the model as a real-time endpoint](tutorial-designer-automobile-price-deploy.md). A real-time endpoint commonly receives a single request via the REST endpoint and returns a prediction in real-time. This is in contrast to batch processing, which processes multiple values at once and saves the results after completion to a datastore. #### Pipeline endpoints A pipeline endpoint is a collection of published pipelines. This logical organiz ### Azure Machine Learning CLI -The [Azure Machine Learning CLI](../how-to-configure-cli.md) is an extension to the Azure CLI, a cross-platform command-line interface for the Azure platform. This extension provides commands to automate your machine learning activities. +The [Azure Machine Learning CLI v1](reference-azure-machine-learning-cli.md) is an extension to the Azure CLI, a cross-platform command-line interface for the Azure platform. This extension provides commands to automate your machine learning activities. ### ML Pipelines Pipeline steps are reusable, and can be run without rerunning the previous steps Azure Machine Learning provides the following monitoring and logging capabilities: * For **Data Scientists**, you can monitor your experiments and log information from your training runs. For more information, see the following articles:- * [Start, monitor, and cancel training runs](../how-to-track-monitor-analyze-runs.md) - * [Log metrics for training runs](../how-to-log-view-metrics.md) - * [Track experiments with MLflow](../how-to-use-mlflow.md) + * [Start, monitor, and cancel training runs](how-to-track-monitor-analyze-runs.md) + * [Log metrics for training runs](how-to-log-view-metrics.md) + * [Track experiments with MLflow](how-to-use-mlflow.md) * [Visualize runs with TensorBoard](how-to-monitor-tensorboard.md) * For **Administrators**, you can monitor information about the workspace, related Azure resources, and events such as resource creation and deletion by using Azure Monitor. For more information, see [How to monitor Azure Machine Learning](../monitor-azure-machine-learning.md).-* For **DevOps** or **MLOps**, you can monitor information generated by models deployed as web services to identify problems with the deployments and gather data submitted to the service. For more information, see [Collect model data](how-to-enable-data-collection.md) and [Monitor with Application Insights](../how-to-enable-app-insights.md). 
+* For **DevOps** or **MLOps**, you can monitor information generated by models deployed as web services to identify problems with the deployments and gather data submitted to the service. For more information, see [Collect model data](how-to-enable-data-collection.md) and [Monitor with Application Insights](how-to-enable-app-insights.md). ## Interacting with your workspace Azure Machine Learning provides the following monitoring and logging capabilitie The studio is also where you access the interactive tools that are part of Azure Machine Learning: -+ [Azure Machine Learning designer](../concept-designer.md) to perform workflow steps without writing code -+ Web experience for [automated machine learning](../concept-automated-ml.md) ++ [Azure Machine Learning designer](concept-designer.md) to perform workflow steps without writing code++ Web experience for [automated machine learning](concept-automated-ml-v1.md) + [Azure Machine Learning notebooks](../how-to-run-jupyter-notebooks.md) to write and run your own code in integrated Jupyter notebook servers. + Data labeling projects to create, manage, and monitor projects for labeling [images](../how-to-create-image-labeling-projects.md) or [text](../how-to-create-text-labeling-projects.md). The studio is also where you access the interactive tools that are part of Azure > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). + Interact with the service in any Python environment with the [Azure Machine Learning SDK for Python](/python/api/overview/azure/ml/intro).-+ Use [Azure Machine Learning designer](../concept-designer.md) to perform the workflow steps without writing code. -+ Use [Azure Machine Learning CLI](../how-to-configure-cli.md) for automation. ++ Use [Azure Machine Learning designer](concept-designer.md) to perform the workflow steps without writing code. ++ Use [Azure Machine Learning CLI](reference-azure-machine-learning-cli.md) for automation. ## Next steps To get started with Azure Machine Learning, see: * [What is Azure Machine Learning?](../overview-what-is-azure-machine-learning.md) * [Create an Azure Machine Learning workspace](../quickstart-create-resources.md)-* [Tutorial: Train and deploy a model](../tutorial-train-deploy-notebook.md) +* [Tutorial: Train and deploy a model](tutorial-train-deploy-notebook.md) |
machine-learning | Concept Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-data.md | With datasets, you can accomplish a number of machine learning tasks through sea + Create a [data labeling project](#label-data-with-data-labeling-projects). + Train machine learning models: + [automated ML experiments](../how-to-use-automated-ml-for-ml-models.md)- + the [designer](../tutorial-designer-automobile-price-train-score.md#import-data) + + the [designer](tutorial-designer-automobile-price-train-score.md#import-data) + [notebooks](how-to-train-with-datasets.md) + [Azure Machine Learning pipelines](how-to-create-machine-learning-pipelines.md) + Access datasets for scoring with [batch inference](../tutorial-pipeline-batch-scoring-classification.md) in [machine learning pipelines](how-to-create-machine-learning-pipelines.md). |
machine-learning | Concept Model Management And Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-model-management-and-deployment.md | To deploy the model as a web service, you must provide the following items: For more information, see [Deploy models](how-to-deploy-and-where.md). -#### Controlled rollout --When deploying to Azure Kubernetes Service, you can use controlled rollout to enable the following scenarios: --* Create multiple versions of an endpoint for a deployment -* Perform A/B testing by routing traffic to different versions of the endpoint. -* Switch between endpoint versions by updating the traffic percentage in endpoint configuration. --For more information, see [Controlled rollout of ML models](../how-to-deploy-azure-kubernetes-service.md#deploy-models-to-aks-using-controlled-rollout-preview). - ### Analytics Microsoft Power BI supports using machine learning models for data analytics. For more information, see [Azure Machine Learning integration in Power BI (preview)](/power-bi/service-machine-learning-integration). There is no universal answer to "How do I know if I should retrain?" but Azure M - Compare the outputs of your new model to those of your old model - Use predefined criteria to choose whether to replace your old model -A theme of the above steps is that your retraining should be automated, not ad hoc. [Azure Machine Learning pipelines](../concept-ml-pipelines.md) are a good answer for creating workflows relating to data preparation, training, validation, and deployment. Read [Retrain models with Azure Machine Learning designer](../how-to-retrain-designer.md) to see how pipelines and the Azure Machine Learning designer fit into a retraining scenario. +A theme of the above steps is that your retraining should be automated, not ad hoc. [Azure Machine Learning pipelines](../concept-ml-pipelines.md) are a good answer for creating workflows relating to data preparation, training, validation, and deployment. Read [Retrain models with Azure Machine Learning designer](how-to-retrain-designer.md) to see how pipelines and the Azure Machine Learning designer fit into a retraining scenario. ## Automate the ML lifecycle |
machine-learning | How To Deploy Model Designer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-model-designer.md | Deployment in the studio consists of the following steps: 1. (Optional) Configure the entry script. 1. Deploy the model to a compute target. -You can also deploy models directly in the designer to skip model registration and file download steps. This can be useful for rapid deployment. For more information see, [Deploy a model with the designer](../tutorial-designer-automobile-price-deploy.md). +You can also deploy models directly in the designer to skip model registration and file download steps. This can be useful for rapid deployment. For more information see, [Deploy a model with the designer](tutorial-designer-automobile-price-deploy.md). Models trained in the designer can also be deployed through the SDK or command-line interface (CLI). For more information, see [Deploy your existing model with Azure Machine Learning](how-to-deploy-and-where.md). score_params = dict( ## Next steps -* [Train a model in the designer](../tutorial-designer-automobile-price-train-score.md) +* [Train a model in the designer](tutorial-designer-automobile-price-train-score.md) * [Deploy models with Azure Machine Learning SDK](how-to-deploy-and-where.md) * [Troubleshoot a failed deployment](how-to-troubleshoot-deployment.md) * [Deploy to Azure Kubernetes Service](how-to-deploy-azure-kubernetes-service.md) |
machine-learning | How To Designer Import Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-designer-import-data.md | If your workspace is in a virtual network, you must perform additional configura ## Next steps -Learn the designer fundamentals with this [Tutorial: Predict automobile price with the designer](../tutorial-designer-automobile-price-train-score.md). +Learn the designer fundamentals with this [Tutorial: Predict automobile price with the designer](tutorial-designer-automobile-price-train-score.md). |
migrate | Concepts Vmware Agentless Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-vmware-agentless-migration.md | |
mysql | Concepts Monitoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-monitoring.md | These metrics are available for Azure Database for MySQL: |Metric display name|Metric|Unit|Description| |||||-|Replication Lag|replication_lag|Seconds|Replication lag is the number of seconds the replica is behind in replaying the transactions received from the source server. This metric is calculated from "Seconds_behind_Master" from the command "SHOW SLAVE STATUS" and is available for replica servers only. For more information, see "[Monitor replication latency](../single-server/how-to-troubleshoot-replication-latency.md)"| +|Replication Lag|replication_lag|Seconds|Replication lag is the number of seconds the replica is behind in replaying the transactions received from the source server. This metric is calculated from "Seconds_behind_Master" from the command "SHOW SLAVE STATUS" and is available for replica servers only. For more information, see "[Monitor replication latency](../how-to-troubleshoot-replication-latency.md)"| |Replica IO Status|replica_io_running|State|Replica IO Status indicates the state of [replication I/O thread](https://dev.mysql.com/doc/refman/8.0/en/replication-implementation-details.html). Metric value is 1 if the I/O thread is running and 0 if not.| |Replica SQL Status|replica_sql_running|State|Replica SQL Status indicates the state of [replication SQL thread](https://dev.mysql.com/doc/refman/8.0/en/replication-implementation-details.html). Metric value is 1 if the SQL thread is running and 0 if not.| |HA IO Status|ha_io_running|State|HA IO Status indicates the state of [HA replication](./concepts-high-availability.md). Metric value is 1 if the I/O thread is running and 0 if not.| |
mysql | How To Read Replicas Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-read-replicas-portal.md | To delete a source server from the Azure portal, use the following steps: ## Next steps - Learn more about [read replicas](concepts-read-replicas.md)-- You can also monitor the replication latency by following the steps mentioned [here](../single-server/how-to-troubleshoot-replication-latency.md#monitoring-replication-latency).-- To troubleshoot high replication latency observed in Metrics, visit the [link](../single-server/how-to-troubleshoot-replication-latency.md#common-scenarios-for-high-replication-latency).+- You can also monitor the replication latency by following the steps mentioned [here](../how-to-troubleshoot-replication-latency.md). +- To troubleshoot high replication latency observed in Metrics, visit the [link](../how-to-troubleshoot-replication-latency.md#common-scenarios-for-high-replication-latency). |
mysql | How To Create Users | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/how-to-create-users.md | |
mysql | How To Troubleshoot Replication Latency | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/how-to-troubleshoot-replication-latency.md | +
+ Title: Troubleshoot replication latency - Azure Database for MySQL - Flexible Server
+description: Learn how to troubleshoot replication latency by using Azure Database for MySQL - Flexible Server read replicas.
+keywords: mysql, troubleshoot, replication latency in seconds
+++++ Last updated : 06/20/2022+++# Troubleshoot replication latency in Azure Database for MySQL - Flexible Server
+++++The [read replica](concepts-read-replicas.md) feature allows you to replicate data from an Azure Database for MySQL server to a read-only replica server. You can scale out workloads by routing read and reporting queries from the application to replica servers. This setup reduces the pressure on the source server. It also improves overall performance and latency of the application as it scales.
++Replicas are updated asynchronously by using the MySQL engine's native binary log (binlog) file position-based replication technology. For more information, see [MySQL binlog file position-based replication configuration overview](https://dev.mysql.com/doc/refman/5.7/en/binlog-replication-configuration-overview.html).
++The replication lag on the secondary read replicas depends on several factors. These factors include but aren't limited to:
++- Network latency.
+- Transaction volume on the source server.
+- Compute tier of the source server and secondary read replica server.
+- Queries running on the source server and secondary server.
++In this article, you learn how to troubleshoot replication latency in Azure Database for MySQL. You'll also understand some common causes of increased replication latency on replica servers.
++> [!NOTE]
+> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
++## Replication concepts
++When a binary log is enabled, the source server writes committed transactions into the binary log. The binary log is used for replication. It's turned on by default for all newly provisioned servers that support up to 16 TB of storage. Two threads run on each replica server. One thread is the *IO thread*, and the other is the *SQL thread*:
++- The IO thread connects to the source server and requests updated binary logs. This thread receives the binary log updates. Those updates are saved on a replica server, in a local log called the *relay log*.
+- The SQL thread reads the relay log and then applies the data changes on replica servers.
++## Monitoring replication latency
++Azure Database for MySQL provides the metric for replication lag in seconds in [Azure Monitor](concepts-monitoring.md). This metric is available only on read replica servers. It's calculated from the `Seconds_Behind_Master` value that's available in MySQL.
++To understand the cause of increased replication latency, connect to the replica server by using [MySQL Workbench](connect-workbench.md) or [Azure Cloud Shell](https://shell.azure.com). Then run the following command.
++> [!NOTE]
+> In your code, replace the example values with your replica server name and admin username. The admin username requires `@\<servername>` for Azure Database for MySQL.
++```azurecli-interactive
+mysql --host=myreplicademoserver.mysql.database.azure.com --user=myadmin@mydemoserver -p
+```
++Here's how the experience looks in the Cloud Shell terminal:
++```bash
+Requesting a Cloud Shell.Succeeded.
+Connecting terminal...
++Welcome to Azure Cloud Shell
++Type "az" to use Azure CLI
+Type "help" to learn about Cloud Shell
++user@Azure:~$mysql -h myreplicademoserver.mysql.database.azure.com -u myadmin@mydemoserver -p
+Enter password:
+Welcome to the MySQL monitor.  Commands end with ; or \g.
+Your MySQL connection id is 64796
+Server version: 5.6.42.0 Source distribution
++Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.
++Oracle is a registered trademark of Oracle Corporation and/or its
+affiliates. Other names may be trademarks of their respective
+owners.
++Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+mysql>
+```
++In the same Cloud Shell terminal, run the following command:
++```sql
+mysql> SHOW SLAVE STATUS;
+```
++Here's a typical output:
+
+> [!div class="mx-imgBorder"]
+> :::image type="content" source="./media/how-to-troubleshoot-replication-latency/show-status.png" alt-text="Monitoring replication latency":::
++The output contains a lot of information. Normally, you need to focus on only the rows that the following table describes.
++|Metric|Description|
+|||
+|Slave_IO_State| Represents the current status of the IO thread. Normally, the status is "Waiting for master to send event" if the source (master) server is synchronizing. A status such as "Connecting to master" indicates that the replica lost the connection to the source server. Make sure the source server is running, or check to see whether a firewall is blocking the connection.|
+|Master_Log_File| Represents the binary log file to which the source server is writing.|
+|Read_Master_Log_Pos| Indicates where the source server is writing in the binary log file.|
+|Relay_Master_Log_File| Represents the binary log file that the replica server is reading from the source server.|
+|Slave_IO_Running| Indicates whether the IO thread is running. The value should be `Yes`. If the value is `NO`, then the replication is likely broken.|
+|Slave_SQL_Running| Indicates whether the SQL thread is running. The value should be `Yes`. If the value is `NO`, then the replication is likely broken.|
+|Exec_Master_Log_Pos| Indicates the position of the Relay_Master_Log_File that the replica is applying. If there's latency, then this position sequence should be smaller than Read_Master_Log_Pos.|
+|Relay_Log_Space|Indicates the total combined size of all existing relay log files. You can check the upper limit size by querying `SHOW GLOBAL VARIABLES` like `relay_log_space_limit`.|
+|Seconds_Behind_Master| Displays replication latency in seconds.|
+|Last_IO_Errno|Displays the IO thread error code, if any. For more information about these codes, see the [MySQL server error message reference](https://dev.mysql.com/doc/mysql-errors/5.7/en/server-error-reference.html).|
+|Last_IO_Error| Displays the IO thread error message, if any.|
+|Last_SQL_Errno|Displays the SQL thread error code, if any. For more information about these codes, see the [MySQL server error message reference](https://dev.mysql.com/doc/mysql-errors/5.7/en/server-error-reference.html).|
+|Last_SQL_Error|Displays the SQL thread error message, if any.|
+|Slave_SQL_Running_State| Indicates the current SQL thread status. In this state, `System lock` is normal.
It's also normal to see a status of `Waiting for dependent transaction to commit`. This status indicates that the replica is waiting for the source server to update committed transactions.|
++If Slave_IO_Running is `Yes` and Slave_SQL_Running is `Yes`, then the replication is running fine.
++Next, check Last_IO_Errno, Last_IO_Error, Last_SQL_Errno, and Last_SQL_Error. These fields display the error number and error message of the most-recent error that caused the SQL thread to stop. An error number of `0` and an empty message means there's no error. Investigate any nonzero error value by checking the error code in the [MySQL server error message reference](https://dev.mysql.com/doc/mysql-errors/5.7/en/server-error-reference.html).
++## Common scenarios for high replication latency
++The following sections address scenarios in which high replication latency is common.
++### Network latency or high CPU consumption on the source server
++If you see the following values, then replication latency is likely caused by high network latency or high CPU consumption on the source server.
++```bash
+Slave_IO_State: Waiting for master to send event
+Master_Log_File: the binary file sequence is larger than Relay_Master_Log_File, e.g. mysql-bin.00020
+Relay_Master_Log_File: the file sequence is smaller than Master_Log_File, e.g. mysql-bin.00010
+```
++In this case, the IO thread is running and is waiting on the source server. The source server has already written to binary log file number 20. The replica has received only up to file number 10. The primary factors for high replication latency in this scenario are network speed or high CPU utilization on the source server.
++In Azure, network latency within a region can typically be measured in milliseconds. Across regions, latency ranges from milliseconds to seconds.
++In most cases, the connection delay between IO threads and the source server is caused by high CPU utilization on the source server. The IO threads are processed slowly. You can detect this problem by using Azure Monitor to check CPU utilization and the number of concurrent connections on the source server.
++If you don't see high CPU utilization on the source server, the problem might be network latency. If network latency is suddenly abnormally high, check the [Azure status page](https://azure.status.microsoft/status) for known issues or outages.
++### Heavy bursts of transactions on the source server
++If you see the following values, then a heavy burst of transactions on the source server is likely causing the replication latency.
++```bash
+Slave_IO_State: Waiting for the slave SQL thread to free enough relay log space
+Master_Log_File: the binary file sequence is larger than Relay_Master_Log_File, e.g. mysql-bin.00020
+Relay_Master_Log_File: the file sequence is smaller than Master_Log_File, e.g. mysql-bin.00010
+```
++The output shows that the replica is retrieving the binary log from the source server but is lagging behind. The replica IO thread indicates that the relay log space is full already.
++Network speed isn't causing the delay. The replica is trying to catch up. But the updated binary log size exceeds the upper limit of the relay log space.
++To troubleshoot this issue, enable the [slow query log](concepts-server-logs.md) on the source server. Use slow query logs to identify long-running transactions on the source server. Then tune the identified queries to reduce the latency on the server.
++Replication latency of this sort is commonly caused by the data load on the source server.
When source servers have weekly or monthly data loads, replication latency is unfortunately unavoidable. The replica servers eventually catch up after the data load on the source server finishes.
++### Slowness on the replica server
++If you observe the following values, then the problem might be on the replica server.
++```bash
+Slave_IO_State: Waiting for master to send event
+Master_Log_File: The binary log file sequence equals to Relay_Master_Log_File, e.g. mysql-bin.000191
+Read_Master_Log_Pos: The position of master server written to the above file is larger than Relay_Log_Pos, e.g. 103978138
+Relay_Master_Log_File: mysql-bin.000191
+Slave_IO_Running: Yes
+Slave_SQL_Running: Yes
+Exec_Master_Log_Pos: The position of slave reads from master binary log file is smaller than Read_Master_Log_Pos, e.g. 13468882
+Seconds_Behind_Master: There is latency and the value here is greater than 0
+```
++In this scenario, the output shows that both the IO thread and the SQL thread are running well. The replica reads the same binary log file that the source server writes. However, the replica server shows some latency while it applies the same transactions from the source server.
++The following sections describe common causes of this kind of latency.
++#### No primary key or unique key on a table
++Azure Database for MySQL uses row-based replication. The source server writes events to the binary log, recording changes in individual table rows. The SQL thread then replicates those changes to the corresponding table rows on the replica server. When a table lacks a primary key or unique key, the SQL thread scans all rows in the target table to apply the changes. This scan can cause replication latency.
++In MySQL, the primary key is an associated index that ensures fast query performance because it can't include NULL values. If you use the InnoDB storage engine, the table data is physically organized to do ultra-fast lookups and sorts based on the primary key.
++We recommend that you add a primary key to such tables on the source server before you create the replica server. If replicas already exist, add the primary keys on the source server and then re-create the read replicas to help improve replication latency.
++Use the following query to find out which tables are missing a primary key on the source server:
++```sql
+select tab.table_schema as database_name, tab.table_name
+from information_schema.tables tab left join
+information_schema.table_constraints tco
+on tab.table_schema = tco.table_schema
+and tab.table_name = tco.table_name
+and tco.constraint_type = 'PRIMARY KEY'
+where tco.constraint_type is null
+and tab.table_schema not in('mysql', 'information_schema', 'performance_schema', 'sys')
+and tab.table_type = 'BASE TABLE'
+order by tab.table_schema, tab.table_name;
+```
++#### Long-running queries on the replica server
++The workload on the replica server can make the SQL thread lag behind the IO thread. Long-running queries on the replica server are one of the common causes of high replication latency. To troubleshoot this problem, enable the [slow query log](concepts-server-logs.md) on the replica server.
++Slow queries can increase resource consumption or slow down the server so that the replica can't catch up with the source server. In this scenario, tune the slow queries. Faster queries prevent blockage of the SQL thread and improve replication latency significantly.
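While the slow query log accumulates data, you can also spot long-running statements on the replica directly. A sketch; the 60-second threshold is an arbitrary starting point:

```sql
-- List active statements on the replica that have been running for over a minute.
SELECT id, user, db, time, state, LEFT(info, 120) AS query_snippet
FROM information_schema.processlist
WHERE command <> 'Sleep' AND time > 60
ORDER BY time DESC;
```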
++#### DDL queries on the source server
++On the source server, a data definition language (DDL) command like [`ALTER TABLE`](https://dev.mysql.com/doc/refman/5.7/en/alter-table.html) can take a long time. While the DDL command is running, thousands of other queries might be running in parallel on the source server.
++When the DDL is replicated, to ensure database consistency, the MySQL engine runs the DDL in a single replication thread. During this task, all other replicated queries are blocked and must wait until the DDL operation finishes on the replica server. Even online DDL operations cause this delay. DDL operations increase replication latency.
++If you enabled the [slow query log](concepts-server-logs.md) on the source server, you can detect this latency problem by checking for a DDL command that ran on the source server. For index drop, rename, and create operations, `ALTER TABLE` can use the INPLACE algorithm. Other operations might need to copy the table data and rebuild the table.
++Typically, concurrent DML is supported for the INPLACE algorithm, but an exclusive metadata lock on the table might be taken briefly while the operation is prepared and run. So for the CREATE INDEX statement, you can use the clauses ALGORITHM and LOCK to influence the method for table copying and the level of concurrency for reading and writing. Adding a FULLTEXT or SPATIAL index, however, can still block DML operations.
++The following example creates an index by using ALGORITHM and LOCK clauses.
++```sql
+ALTER TABLE table_name ADD INDEX index_name (column), ALGORITHM=INPLACE, LOCK=NONE;
+```
++Unfortunately, for a DDL statement that requires a lock, you can't avoid replication latency. To reduce the potential effects, do these types of DDL operations during off-peak hours, for instance during the night.
++#### Downgraded replica server
++In Azure Database for MySQL, read replicas use the same server configuration as the source server. You can change the replica server configuration after it has been created.
++If the replica server is downgraded, the workload can consume more resources, which in turn can lead to replication latency. To detect this problem, use Azure Monitor to check the CPU and memory consumption of the replica server.
++In this scenario, we recommend that you keep the replica server's configuration at values equal to or greater than the values of the source server. This configuration allows the replica to keep up with the source server.
++#### Improving replication latency by tuning the source server parameters
++In Azure Database for MySQL, by default, replication is optimized to run with parallel threads on replicas. When high-concurrency workloads on the source server cause the replica server to fall behind, you can improve the replication latency by configuring the parameter binlog_group_commit_sync_delay on the source server.
++The binlog_group_commit_sync_delay parameter controls how many microseconds the binary log commit waits before synchronizing the binary log file. The benefit of this parameter is that instead of immediately applying every committed transaction, the source server sends the binary log updates in bulk. This delay reduces IO on the replica and helps improve performance.
++It might be useful to set the binlog_group_commit_sync_delay parameter to 1000 or so. Then monitor the replication latency. Set this parameter cautiously, and use it only for high-concurrency workloads.
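One way to apply the setting on the source server, assuming the Azure CLI commands for flexible server; the resource group and server names are placeholders:

```azurecli-interactive
# Set binlog_group_commit_sync_delay (in microseconds) on the source server, then monitor lag.
az mysql flexible-server parameter set --resource-group myResourceGroup \
    --server-name mydemoserver --name binlog_group_commit_sync_delay --value 1000
```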
++> [!IMPORTANT]
+> On the replica server, we recommend setting the binlog_group_commit_sync_delay parameter to 0. Unlike the source server, the replica server won't have high concurrency, and increasing the value of binlog_group_commit_sync_delay on the replica server could inadvertently cause replication lag to increase.
++For low-concurrency workloads that include many singleton transactions, the binlog_group_commit_sync_delay setting can increase latency. Latency can increase because the IO thread waits for bulk binary log updates even if only a few transactions are committed.
++## Advanced Troubleshooting Options
++If the `SHOW SLAVE STATUS` command doesn't provide enough information to troubleshoot replication latency, try these additional options to learn which processes are active or waiting.
++### View the threads table
++The [`performance_schema.threads`](https://dev.mysql.com/doc/refman/5.7/en/performance-schema-threads-table.html) table shows the process state. A process with the state Waiting for lock_type lock indicates that there's a lock on one of the tables, preventing the replication thread from updating the table.
++```sql
+SELECT name, processlist_state, processlist_time FROM performance_schema.threads WHERE name LIKE '%slave%';
+```
++For more information, see [General Thread States](https://dev.mysql.com/doc/refman/5.7/en/general-thread-states.html).
++### View the replication_connection_status table
++The performance_schema.replication_connection_status table shows the current status of the replication I/O thread that handles the replica's connection to the source, and it changes more frequently. The table contains values that vary during the connection.
++```sql
+SELECT * FROM performance_schema.replication_connection_status;
+```
++### View the replication_applier_status_by_worker table
++The `performance_schema.replication_applier_status_by_worker` table shows the status of the worker threads, including the last seen transaction and the last error number and message, which help you find the transaction that has the issue and identify the root cause.
++In Data-in replication, you can run the following commands to skip errors or transactions:
++`az_replication_skip_counter`
++or
++`az_replication_skip_gtid_transaction`
++```sql
+SELECT * FROM performance_schema.replication_applier_status_by_worker;
+```
++### View the SHOW RELAYLOG EVENTS statement
++The `SHOW RELAYLOG EVENTS` statement shows the events in the relay log of a replica.
++- For GTID-based replication (read replicas), the statement shows the GTID transaction and the binlog file and its position. You can use mysqlbinlog to get the contents and the statements being run.
+- For MySQL binlog position-based replication (used for Data-in replication), it shows the statements being run, which helps you know which tables the transactions target.
++### Check the InnoDB Standard Monitor and Lock Monitor Output
++You can also try checking the InnoDB Standard Monitor and Lock Monitor output to help resolve locks and deadlocks and minimize replication lag. The Lock Monitor is the same as the Standard Monitor except that it includes additional lock information. To view this additional lock and deadlock information, run the `SHOW ENGINE INNODB STATUS\G` command.
++## Next steps
++Check out the [MySQL binlog replication overview](https://dev.mysql.com/doc/refman/5.7/en/binlog-replication-configuration-overview.html). |
mysql | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/policy-reference.md | description: Lists Azure Policy built-in policy definitions for Azure Database f --++ Last updated 02/21/2023 |
network-watcher | Diagnose Network Security Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-network-security-rules.md | +
+ Title: Check security rules using NSG diagnostics
++description: Use NSG diagnostics to check if traffic is allowed or denied by network security group rules or Azure Virtual Network Manager security admin rules.
++++ Last updated : 05/31/2023++++# Diagnose network security rules
++You can use [network security groups](../virtual-network/network-security-groups-overview.md) to filter and control inbound and outbound network traffic to and from your Azure resources. You can also use [Azure Virtual Network Manager](../virtual-network-manager/overview.md) to apply admin security rules to your Azure resources to control network traffic.
++In this article, you learn how to use Azure Network Watcher [NSG diagnostics](network-watcher-network-configuration-diagnostics-overview.md) to check and troubleshoot security rules applied to your Azure traffic. NSG diagnostics checks if the traffic is allowed or denied by applied security rules.
++The example in this article shows you how a misconfigured network security group can prevent you from using Azure Bastion to connect to a virtual machine.
++## Prerequisites
++# [**Portal**](#tab/portal)
++- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
++- Sign in to the [Azure portal](https://portal.azure.com/?WT.mc_id=A261C142F) with your Azure account.
++# [**PowerShell**](#tab/powershell)
++- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
++- Azure Cloud Shell or Azure PowerShell.
++ The steps in this article run the Azure PowerShell cmdlets interactively in [Azure Cloud Shell](/azure/cloud-shell/overview). To run the commands in the Cloud Shell, select **Open Cloudshell** at the upper-right corner of a code block. Select **Copy** to copy the code and then paste it into Cloud Shell to run it. You can also run the Cloud Shell from within the Azure portal.
++ You can also [install Azure PowerShell locally](/powershell/azure/install-azure-powershell) to run the cmdlets. If you run PowerShell locally, sign in to Azure using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet.
++# [**Azure CLI**](#tab/cli)
++- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
++- Azure Cloud Shell or Azure CLI.
+
+ The steps in this article run the Azure CLI commands interactively in [Azure Cloud Shell](/azure/cloud-shell/overview). To run the commands in the Cloud Shell, select **Open Cloudshell** at the upper-right corner of a code block. Select **Copy** to copy the code, and paste it into Cloud Shell to run it. You can also run the Cloud Shell from within the Azure portal.
+
+ You can also [install Azure CLI locally](/cli/azure/install-azure-cli) to run the commands. If you run Azure CLI locally, sign in to Azure using the [az login](/cli/azure/reference-index#az-login) command.
++++## Create a virtual network and a Bastion host
++In this section, you create a virtual network with two subnets and an Azure Bastion host. The first subnet is used for the virtual machine, and the second subnet is used for the Bastion host. You also create a network security group and apply it to the first subnet.
++# [**Portal**](#tab/portal)
++1.
In the search box at the top of the portal, enter *virtual networks*. Select **Virtual networks** in the search results.
++ :::image type="content" source="./media/diagnose-network-security-rules/portal-search.png" alt-text="Screenshot shows how to search for virtual networks in the Azure portal." lightbox="./media/diagnose-network-security-rules/portal-search.png":::
++1. Select **+ Create**. In **Create virtual network**, enter or select the following values in the **Basics** tab:
++ | Setting | Value |
+ | | |
+ | **Project Details** |  |
+ | Subscription | Select your Azure subscription. |
+ | Resource Group | Select **Create new**. </br> Enter *myResourceGroup* in **Name**. </br> Select **OK**. |
+ | **Instance details** |  |
+ | Virtual network name | Enter *myVNet*. |
+ | Region | Select **(US) East US**. |
++1. Select the **Security** tab, or select the **Next** button at the bottom of the page.
++1. Under **Azure Bastion**, select **Enable Azure Bastion** and accept the default values:
++ | Setting | Value |
+ | | |
+ | Azure Bastion host name | **myVNet-Bastion**. |
+ | Azure Bastion public IP Address | **(New) myVNet-bastion-publicIpAddress**. |
++1. Select the **IP Addresses** tab, or select the **Next** button at the bottom of the page.
++1. Accept the default IP address space **10.0.0.0/16** and edit the default subnet by selecting the pencil icon. In the **Edit subnet** page, enter the following values:
++ | Setting | Value |
+ | | |
+ | **Subnet details** |  |
+ | Name | Enter *mySubnet*. |
+ | **Security** |  |
+ | Network security group | Select **Create new**. </br> Enter *mySubnet-nsg* in **Name**. </br> Select **OK**. |
++1. Select **Review + create**.
++1. Review the settings, and then select **Create**.
++# [**PowerShell**](#tab/powershell)
++1. Create a resource group using [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup). An Azure resource group is a logical container into which Azure resources are deployed and managed.
++ ```azurepowershell-interactive
+ # Create a resource group.
+ New-AzResourceGroup -Name 'myResourceGroup' -Location 'eastus'
+ ```
++1. Create a default network security group using [New-AzNetworkSecurityGroup](/powershell/module/az.network/new-aznetworksecuritygroup).
++ ```azurepowershell-interactive
+ # Create a network security group.
+ $networkSecurityGroup = New-AzNetworkSecurityGroup -Name 'mySubnet-nsg' -ResourceGroupName 'myResourceGroup' -Location 'eastus'
+ ```
++1. Create a subnet configuration for the virtual machine subnet and the Bastion host subnet using [New-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/new-azvirtualnetworksubnetconfig).
++ ```azurepowershell-interactive
+ # Create the subnet configurations.
+ $firstSubnet = New-AzVirtualNetworkSubnetConfig -Name 'mySubnet' -AddressPrefix '10.0.0.0/24' -NetworkSecurityGroup $networkSecurityGroup
+ $secondSubnet = New-AzVirtualNetworkSubnetConfig -Name 'AzureBastionSubnet' -AddressPrefix '10.0.1.0/26'
+ ```
++1. Create a virtual network using [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork).
++ ```azurepowershell-interactive
+ # Create a virtual network.
+ $vnet = New-AzVirtualNetwork -Name 'myVNet' -ResourceGroupName 'myResourceGroup' -Location 'eastus' -AddressPrefix '10.0.0.0/16' -Subnet $firstSubnet, $secondSubnet
+ ```
++1. Create the public IP address resource required for the Bastion host using [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress).
++ ```azurepowershell-interactive + # Create a public IP address for Azure Bastion. + New-AzPublicIpAddress -ResourceGroupName 'myResourceGroup' -Name 'myBastionIp' -Location 'eastus' -AllocationMethod 'Static' -Sku 'Standard' + ``` ++1. Create the Bastion host using [New-AzBastion](/powershell/module/az.network/new-azbastion). ++ ```azurepowershell-interactive + # Create an Azure Bastion host. + New-AzBastion -ResourceGroupName 'myResourceGroup' -Name 'myVNet-Bastion' -PublicIpAddressRgName 'myResourceGroup' -PublicIpAddressName 'myBastionIp' -VirtualNetwork $vnet + ``` ++# [**Azure CLI**](#tab/cli) ++1. Create a resource group using [az group create](/cli/azure/group#az-group-create). An Azure resource group is a logical container into which Azure resources are deployed and managed. ++ ```azurecli-interactive + # Create a resource group. + az group create --name 'myResourceGroup' --location 'eastus' + ``` ++1. Create a default network security group using [az network nsg create](/cli/azure/network/nsg#az-network-nsg-create). ++ ```azurecli-interactive + # Create a network security group. + az network nsg create --name 'mySubnet-nsg' --resource-group 'myResourceGroup' --location 'eastus' + ``` ++1. Create a virtual network using [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create). ++ ```azurecli-interactive + # Create a virtual network. + az network vnet create --resource-group 'myResourceGroup' --name 'myVNet' --subnet-name 'mySubnet' --subnet-prefixes 10.0.0.0/24 --network-security-group 'mySubnet-nsg' + ``` ++1. Create a subnet for Azure Bastion using [az network vnet subnet create](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-create). ++ ```azurecli-interactive + # Create AzureBastionSubnet. + az network vnet subnet create --name 'AzureBastionSubnet' --resource-group 'myResourceGroup' --vnet-name 'myVNet' --address-prefixes '10.0.1.0/26' + ``` ++1. Create a public IP address for the Bastion host using [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create). ++ ```azurecli-interactive + # Create a public IP address resource. + az network public-ip create --resource-group 'myResourceGroup' --name 'myBastionIp' --sku Standard + ``` ++1. Create a Bastion host using [az network bastion create](/cli/azure/network/bastion#az-network-bastion-create). ++ ```azurecli-interactive + # Create an Azure Bastion host. + az network bastion create --name 'myVNet-Bastion' --public-ip-address 'myBastionIp' --resource-group 'myResourceGroup' --vnet-name 'myVNet' + ``` ++++
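Bastion deployment can take several minutes to complete. Before you continue, you can optionally confirm that the virtual network and Bastion host deployed as expected. The following is a minimal PowerShell sketch that assumes the resource names used in the preceding steps; `az network vnet show` and `az network bastion show` are rough CLI equivalents.

```azurepowershell-interactive
# Optional: list the subnets of the virtual network and their address prefixes.
# Expect mySubnet (10.0.0.0/24) and AzureBastionSubnet (10.0.1.0/26).
$vnet = Get-AzVirtualNetwork -Name 'myVNet' -ResourceGroupName 'myResourceGroup'
$vnet.Subnets | Select-Object Name, AddressPrefix

# Optional: check that the Bastion host finished provisioning (expect "Succeeded").
(Get-AzBastion -ResourceGroupName 'myResourceGroup' -Name 'myVNet-Bastion').ProvisioningState
```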
## Create a virtual machine ++In this section, you create a virtual machine and a network security group applied to its network interface. ++# [**Portal**](#tab/portal) ++1. In the search box at the top of the portal, enter *virtual machines*. Select **Virtual machines** in the search results. ++1. Select **+ Create** and then select **Azure virtual machine**. ++1. In **Create a virtual machine**, enter or select the following values in the **Basics** tab: ++ | Setting | Value | + | | | + | **Project Details** | | + | Subscription | Select your Azure subscription. | + | Resource Group | Select **myResourceGroup**. | + | **Instance details** | | + | Virtual machine name | Enter *myVM*. | + | Region | Select **(US) East US**. | + | Availability Options | Select **No infrastructure redundancy required**. | + | Security type | Select **Standard**. | + | Image | Select **Windows Server 2022 Datacenter: Azure Edition - x64 Gen2**. | + | Size | Choose a size or leave the default setting. | + | **Administrator account** | | + | Username | Enter a username. | + | Password | Enter a password. | + | Confirm password | Reenter the password. | ++1. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**. ++1. On the **Networking** tab, enter or select the following values: ++ | Setting | Value | + | | | + | **Network interface** | | + | Virtual network | Select **myVNet**. | + | Subnet | Select **mySubnet**. | + | Public IP | Select **None**. | + | NIC network security group | Select **Basic**. | + | Public inbound ports | Select **None**. | ++1. Select **Review + create**. ++1. Review the settings, and then select **Create**. ++# [**PowerShell**](#tab/powershell) ++1. Create a default network security group using [New-AzNetworkSecurityGroup](/powershell/module/az.network/new-aznetworksecuritygroup). ++ ```azurepowershell-interactive + # Create a network security group. + New-AzNetworkSecurityGroup -Name 'myVM-nsg' -ResourceGroupName 'myResourceGroup' -Location 'eastus' + ``` ++1. Create a virtual machine using [New-AzVM](/powershell/module/az.compute/new-azvm). When prompted, enter a username and password. ++ ```azurepowershell-interactive + # Create a virtual machine. + New-AzVm -ResourceGroupName 'myResourceGroup' -Name 'myVM' -Location 'eastus' -VirtualNetworkName 'myVNet' -SubnetName 'mySubnet' -SecurityGroupName 'myVM-nsg' -ImageName 'MicrosoftWindowsServer:WindowsServer:2022-Datacenter-azure-edition:latest' + ``` ++# [**Azure CLI**](#tab/cli) ++1. Create a default network security group using [az network nsg create](/cli/azure/network/nsg#az-network-nsg-create). ++ ```azurecli-interactive + # Create a network security group for the network interface of the virtual machine. + az network nsg create --name 'myVM-nsg' --resource-group 'myResourceGroup' --location 'eastus' + ``` ++1. Create a virtual machine using [az vm create](/cli/azure/vm#az-vm-create). When prompted, enter a username and password. ++ ```azurecli-interactive + # Create a virtual machine. + az vm create --resource-group 'myResourceGroup' --name 'myVM' --location 'eastus' --vnet-name 'myVNet' --subnet 'mySubnet' --public-ip-address '' --nsg 'myVM-nsg' --image 'Win2022AzureEditionCore' + ``` ++++
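Optionally, you can confirm that the virtual machine is running and note the private IP address of its network interface; that address (10.0.0.4 in this example) is used as the destination when you run NSG diagnostics later in this article. A minimal PowerShell sketch, assuming the resource names used in the preceding steps:

```azurepowershell-interactive
# Optional: check the power state of the virtual machine.
(Get-AzVM -Name 'myVM' -ResourceGroupName 'myResourceGroup' -Status).Statuses | Select-Object Code, DisplayStatus

# Optional: find the private IP address of the network interface attached to myVM.
Get-AzNetworkInterface -ResourceGroupName 'myResourceGroup' |
    Where-Object { $_.VirtualMachine.Id -like '*myVM*' } |
    ForEach-Object { $_.IpConfigurations[0].PrivateIpAddress }
```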
## Add a security rule to the network security group ++In this section, you add a security rule to the network security group associated with the network interface of **myVM**. The rule denies any inbound traffic from the virtual network. ++# [**Portal**](#tab/portal) ++1. In the search box at the top of the portal, enter *network security groups*. Select **Network security groups** in the search results. ++1. From the list of network security groups, select **myVM-nsg**. ++1. Under **Settings**, select **Inbound security rules**. ++1. Select **+ Add**. In **Add inbound security rule**, enter or select the following values: ++ | Setting | Value | + | | | + | Source | Select **Service Tag**. | + | Source service tag | Select **VirtualNetwork**. | + | Source port ranges | Enter *. | + | Destination | Select **Any**. | + | Service | Select **Custom**. | + | Destination port ranges | Enter *. | + | Protocol | Select **Any**. | + | Action | Select **Deny**. | + | Priority | Enter *1000*. | + | Name | Enter *DenyVnetInBound*. | ++1. Select **Add**. ++# [**PowerShell**](#tab/powershell) ++Use [Add-AzNetworkSecurityRuleConfig](/powershell/module/az.network/add-aznetworksecurityruleconfig) to create a security rule that denies traffic from the virtual network. Then use [Set-AzNetworkSecurityGroup](/powershell/module/az.network/set-aznetworksecuritygroup) to update the network security group with the new security rule. ++```azurepowershell-interactive +# Place the network security group configuration into a variable. +$networkSecurityGroup = Get-AzNetworkSecurityGroup -Name 'myVM-nsg' -ResourceGroupName 'myResourceGroup' +# Create a security rule. +Add-AzNetworkSecurityRuleConfig -Name 'DenyVnetInBound' -NetworkSecurityGroup $networkSecurityGroup ` +-Access 'Deny' -Protocol '*' -Direction 'Inbound' -Priority '1000' ` +-SourceAddressPrefix 'VirtualNetwork' -SourcePortRange '*' -DestinationAddressPrefix '*' -DestinationPortRange '*' +# Update the network security group. +Set-AzNetworkSecurityGroup -NetworkSecurityGroup $networkSecurityGroup +``` ++# [**Azure CLI**](#tab/cli) ++Use [az network nsg rule create](/cli/azure/network/nsg/rule#az-network-nsg-rule-create) to add a security rule to the network security group. The rule denies traffic from the virtual network. ++```azurecli-interactive +# Add a security rule to the network security group. +az network nsg rule create --name 'DenyVnetInBound' --resource-group 'myResourceGroup' --nsg-name 'myVM-nsg' --priority '1000' \ +--access 'Deny' --protocol '*' --direction 'Inbound' --source-address-prefixes 'VirtualNetwork' --source-port-ranges '*' \ +--destination-address-prefixes '*' --destination-port-ranges '*' +``` ++++> [!NOTE] +> The **VirtualNetwork** service tag represents the address space of the virtual network, all connected on-premises address spaces, peered virtual networks, virtual networks connected to a virtual network gateway, the virtual IP address of the host, and address prefixes used on user-defined routes. For more information, see [Service tags](../virtual-network/service-tags-overview.md).
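Before you run NSG diagnostics, you can optionally inspect the new rule to confirm that its source is the **VirtualNetwork** service tag. A minimal PowerShell sketch, assuming the names used earlier (`az network nsg rule show` is the CLI equivalent):

```azurepowershell-interactive
# Optional: inspect the DenyVnetInBound rule; SourceAddressPrefix should be VirtualNetwork.
$nsg = Get-AzNetworkSecurityGroup -Name 'myVM-nsg' -ResourceGroupName 'myResourceGroup'
Get-AzNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg -Name 'DenyVnetInBound' |
    Select-Object Name, Priority, Access, SourceAddressPrefix
```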
## Check security rules applied to virtual machine traffic ++Use NSG diagnostics to check the security rules applied to the traffic originating from the Bastion subnet to the virtual machine. ++# [**Portal**](#tab/portal) ++1. In the search box at the top of the portal, search for and select **Network Watcher**. ++1. Under **Network diagnostic tools**, select **NSG diagnostics**. ++1. On the **NSG diagnostics** page, enter or select the following values: ++ | Setting | Value | + | - | | + | Subscription | Select the Azure subscription that has the virtual machine that you want to test the connection with. | + | Resource group | Select the resource group that has the virtual machine that you want to test the connection with. | + | Supported resource type | Select **Virtual machine**. | + | Resource | Select the virtual machine that you want to test the connection with. | + | Protocol | Select **TCP**. Other available options are **Any**, **UDP**, and **ICMP**. | + | Direction | Select **Inbound**. The other available option is **Outbound**. | + | Source type | Select **IPv4 address/CIDR**. The other available option is **Service Tag**. | + | IPv4 address/CIDR | Enter *10.0.1.0/26*, which is the IP address range of the Bastion subnet. Acceptable values are a single IP address, multiple IP addresses, a single IP prefix, or multiple IP prefixes. | + | Destination IP address | Enter *10.0.0.4*, which is the IP address of **myVM**. | + | Destination port | Enter * to include all ports. | ++ :::image type="content" source="./media/diagnose-network-security-rules/nsg-diagnostics-vm-values.png" alt-text="Screenshot showing required values for NSG diagnostics to test inbound connections to a virtual machine in the Azure portal." lightbox="./media/diagnose-network-security-rules/nsg-diagnostics-vm-values.png"::: ++1. Select **Check** to run the test. After NSG diagnostics finishes checking all security rules, it displays the result. ++ :::image type="content" source="./media/diagnose-network-security-rules/nsg-diagnostics-vm-test-result-denied.png" alt-text="Screenshot showing the result of inbound connections to the virtual machine as Denied." lightbox="./media/diagnose-network-security-rules/nsg-diagnostics-vm-test-result-denied.png"::: ++ The result shows the three security rules that were assessed for the inbound connection from the Bastion subnet: ++ - **GlobalRules**: This security admin rule is applied at the virtual network level using Azure Virtual Network Manager. The rule allows inbound TCP traffic from the Bastion subnet to the virtual machine. + - **mySubnet-nsg**: This network security group is applied at the subnet level (the subnet of the virtual machine). The rule allows inbound TCP traffic from the Bastion subnet to the virtual machine. + - **myVM-nsg**: This network security group is applied at the network interface (NIC) level. The rule denies inbound TCP traffic from the Bastion subnet to the virtual machine. ++1. Select **myVM-nsg** to see details about the security rules in this network security group and which rule denied the traffic. ++ :::image type="content" source="./media/diagnose-network-security-rules/nsg-diagnostics-vm-test-result-denied-details.png" alt-text="Screenshot showing the details of the network security group that denied the traffic to the virtual machine." lightbox="./media/diagnose-network-security-rules/nsg-diagnostics-vm-test-result-denied-details.png"::: ++ In the **myVM-nsg** network security group, the security rule **DenyVnetInBound** denies any traffic coming from the address space of the **VirtualNetwork** service tag to the virtual machine. The Bastion host uses IP addresses from **10.0.1.0/26**, which are included in the **VirtualNetwork** service tag, to connect to the virtual machine. Therefore, the connection from the Bastion host is denied by the **DenyVnetInBound** security rule. ++# [**PowerShell**](#tab/powershell) ++Use [Invoke-AzNetworkWatcherNetworkConfigurationDiagnostic](/powershell/module/az.network/invoke-aznetworkwatchernetworkconfigurationdiagnostic) to start the NSG diagnostics session. ++```azurepowershell-interactive +# Create a profile for the diagnostic session. +$profile = New-AzNetworkWatcherNetworkConfigurationDiagnosticProfile -Direction Inbound -Protocol Tcp -Source 10.0.1.0/26 -Destination 10.0.0.4 -DestinationPort * +# Place the virtual machine configuration into a variable. +$vm = Get-AzVM -Name 'myVM' -ResourceGroupName 'myResourceGroup' +# Start the NSG diagnostics session. 
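+# The cmdlet evaluates the traffic described in $profile against the security rules +# applied to the target virtual machine at the subnet and NIC levels, plus any security admin rules. 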
+Invoke-AzNetworkWatcherNetworkConfigurationDiagnostic -Location 'eastus' -TargetResourceId $vm.Id -Profile $profile | Format-List +``` ++Output similar to the following example is returned: ++```output +Results : {Microsoft.Azure.Commands.Network.Models.PSNetworkConfigurationDiagnosticResult} +ResultsText : [ + { + "Profile": { + "Direction": "Inbound", + "Protocol": "Tcp", + "Source": "10.0.1.0/26", + "Destination": "10.0.0.4", + "DestinationPort": "*" + }, + "NetworkSecurityGroupResult": { + "SecurityRuleAccessResult": "Deny", + "EvaluatedNetworkSecurityGroups": [ + { + "NetworkSecurityGroupId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/NetworkAdmin/providers/Microsoft.Network/networkManagers/GlobalRules", + "MatchedRule": { + "RuleName": "VirtualNetwork", + "Action": "Allow" + }, + "RulesEvaluationResult": [ + { + "Name": "VirtualNetwork", + "ProtocolMatched": true, + "SourceMatched": true, + "SourcePortMatched": true, + "DestinationMatched": true, + "DestinationPortMatched": true + } + ] + }, + { + "NetworkSecurityGroupId": "/subscriptions/abcdef01-2345-6789-0abc-def012345678/resourceGroups/myResourceGroup/providers/Microsoft.Network/networkSecurityGroups/mySubnet-nsg", + "MatchedRule": { + "RuleName": "DefaultRule_AllowVnetInBound", + "Action": "Allow" + }, + "RulesEvaluationResult": [ + { + "Name": "DefaultRule_AllowVnetInBound", + "ProtocolMatched": true, + "SourceMatched": true, + "SourcePortMatched": true, + "DestinationMatched": true, + "DestinationPortMatched": true + } + ] + }, + { + "NetworkSecurityGroupId": "/subscriptions/abcdef01-2345-6789-0abc-def012345678/resourceGroups/myResourceGroup/providers/Microsoft.Network/networkSecurityGroups/myVM-nsg", + "MatchedRule": { + "RuleName": "UserRule_DenyVnetInBound", + "Action": "Deny" + }, + "RulesEvaluationResult": [ + { + "Name": "UserRule_DenyVnetInBound", + "ProtocolMatched": true, + "SourceMatched": true, + "SourcePortMatched": true, + "DestinationMatched": true, + "DestinationPortMatched": true + } + ] + } + ] + } + } + ] +``` ++The result shows the three security rules that were assessed for the inbound connection from the Bastion subnet: ++- **GlobalRules**: This security admin rule is applied at the virtual network level using Azure Virtual Network Manager. The rule allows inbound TCP traffic from the Bastion subnet to the virtual machine. +- **mySubnet-nsg**: This network security group is applied at the subnet level (the subnet of the virtual machine). The rule allows inbound TCP traffic from the Bastion subnet to the virtual machine. +- **myVM-nsg**: This network security group is applied at the network interface (NIC) level. The rule denies inbound TCP traffic from the Bastion subnet to the virtual machine. ++In the **myVM-nsg** network security group, the security rule **DenyVnetInBound** denies any traffic coming from the address space of the **VirtualNetwork** service tag to the virtual machine. The Bastion host uses IP addresses from **10.0.1.0/26**, which are included in the **VirtualNetwork** service tag, to connect to the virtual machine. Therefore, the connection from the Bastion host is denied by the **DenyVnetInBound** security rule. ++# [**Azure CLI**](#tab/cli) ++Use [az network watcher run-configuration-diagnostic](/cli/azure/network/watcher#az-network-watcher-run-configuration-diagnostic) to start the NSG diagnostics session. ++```azurecli-interactive +# Start the NSG diagnostics session. 
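+# The parameters mirror the fields in the portal: direction, protocol, source prefix, +# destination IP address, and destination port ('*' checks all ports). 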
+az network watcher run-configuration-diagnostic --resource 'myVM' --resource-group 'myResourceGroup' --resource-type 'virtualMachines' --direction 'Inbound' --protocol 'TCP' --source '10.0.1.0/26' --destination '10.0.0.4' --port '*' +``` ++Output similar to the following example is returned: ++```output +{ + "results": [ + { + "networkSecurityGroupResult": { + "evaluatedNetworkSecurityGroups": [ + { + "appliedTo": "/subscriptions/abcdef01-2345-6789-0abc-def012345678/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVNet", + "matchedRule": { + "action": "Allow", + "ruleName": "VirtualNetwork" + }, + "networkSecurityGroupId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/NetworkAdmin/providers/Microsoft.Network/networkManagers/GlobalRules", + "rul |